
Different count of Buffers: shared hit when run the same query second time

Asked by: Anton P · Asked: 11/8/2023 · Last edited by: SelVazi, Anton P · Updated: 11/12/2023 · Viewed: 48 times

Q:

PostgreSQL 13.7 (Debian 13.7-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit

auto_explain.log_analyze: true

auto_explain.log_buffers: true
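
These settings belong to PostgreSQL's auto_explain module. A minimal sketch of a session-level setup that would produce log entries like the one below; the log_min_duration threshold is an assumption, since the question does not state it:

```sql
-- Sketch: enable auto_explain for the current session (requires superuser).
-- The '1s' threshold is an assumed value, not taken from the question.
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '1s'; -- log plans of queries slower than 1s
SET auto_explain.log_analyze = on;        -- include actual times and row counts
SET auto_explain.log_buffers = on;        -- include the Buffers: lines
```

In production these parameters are more typically set in postgresql.conf with auto_explain added to session_preload_libraries, so the logging applies to all sessions.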

The PostgreSQL log shows a long-running query together with its execution plan:

2023-11-03 08:24:51.481 MSK [1094142] User: otrs; Session: 65447e70.10b1fe; DB: otrs; Host: xxx.xxx.xxx.xxx(41236) LOG:  duration: 8176.464 ms  plan:
        Query Text: SELECT DISTINCT st.id, st.tn, st.create_time FROM ticket st INNER JOIN dynamic_field_value dfv1
                                ON (st.id = dfv1.object_id
                                    AND dfv1.field_id = 200)  WHERE 1=1 AND ( LOWER(dfv1.value_text) = LOWER('CCS-4180967 [9954762722309]') ) ORDER BY st.create_time DESC LIMIT 1
        Limit  (cost=1001.23..2173.05 rows=1 width=30) (actual time=8121.116..8176.458 rows=0 loops=1)
          Buffers: shared hit=13853718
          ->  Unique  (cost=1001.23..2397368.81 rows=2045 width=30) (actual time=8121.114..8176.455 rows=0 loops=1)
                Buffers: shared hit=13853718
                ->  Nested Loop  (cost=1001.23..2397353.47 rows=2045 width=30) (actual time=8121.113..8176.454 rows=0 loops=1)
                      Buffers: shared hit=13853718
                      ->  Gather Merge  (cost=1000.80..621301.27 rows=3339010 width=30) (actual time=64.816..1305.832 rows=3728611 loops=1)
                            Workers Planned: 5
                            Workers Launched: 5
                            Buffers: shared hit=2209941
                            ->  Incremental Sort  (cost=0.72..218201.54 rows=667802 width=30) (actual time=4.200..1302.012 rows=621435 loops=6)
                                  Sort Key: st.create_time DESC, st.id, st.tn
                                  Presorted Key: st.create_time
                                  Full-sort Groups: 4947  Sort Method: quicksort  Average Memory: 27kB  Peak Memory: 27kB
                                  Buffers: shared hit=2209941
                                  Worker 0:  Full-sort Groups: 22336  Sort Method: quicksort  Average Memory: 27kB  Peak Memory: 27kB
                                  Worker 1:  Full-sort Groups: 22341  Sort Method: quicksort  Average Memory: 27kB  Peak Memory: 27kB
                                  Worker 2:  Full-sort Groups: 22273  Sort Method: quicksort  Average Memory: 27kB  Peak Memory: 27kB
                                  Worker 3:  Full-sort Groups: 22254  Sort Method: quicksort  Average Memory: 27kB  Peak Memory: 27kB
                                  Worker 4:  Full-sort Groups: 22256  Sort Method: quicksort  Average Memory: 27kB  Peak Memory: 27kB
                                  ->  Parallel Index Scan using ticket_create_time_desc_idx on ticket st  (cost=0.43..188150.45 rows=667802 width=30) (actual time=0.244..1076.668 rows=621435 loops=6)
                                        Buffers: shared hit=2209436
                      ->  Index Scan using dynamic_field_value_field_values on dynamic_field_value dfv1  (cost=0.43..0.52 rows=1 width=8) (actual time=0.002..0.002 rows=0 loops=3728611)
                            Index Cond: ((object_id = st.id) AND (field_id = 200))
                            Filter: (lower((value_text)::text) = 'ccs-4180967 [9954762722309]'::text)
                            Rows Removed by Filter: 0
                            Buffers: shared hit=11643777

A few minutes later I ran EXPLAIN (ANALYZE, BUFFERS) for this query and got the same execution plan, but with a different Buffers: shared hit count, and it ran much faster:

        Limit  (cost=1001.23..2173.08 rows=1 width=30) (actual time=13.219..25.331 rows=1 loops=1)
          Buffers: shared hit=14869
          ->  Unique  (cost=1001.23..2397423.81 rows=2045 width=30) (actual time=13.218..25.329 rows=1 loops=1)
                Buffers: shared hit=14869
                ->  Nested Loop  (cost=1001.23..2397408.47 rows=2045 width=30) (actual time=13.216..25.327 rows=1 loops=1)
                      Buffers: shared hit=14869
                      ->  Gather Merge  (cost=1000.80..621311.17 rows=3339010 width=30) (actual time=10.033..24.206 rows=250 loops=1)
                            Workers Planned: 5
                            Workers Launched: 5
                            Buffers: shared hit=13977
                            ->  Incremental Sort  (cost=0.72..218211.44 rows=667802 width=30) (actual time=2.592..8.813 rows=68 loops=6)
                                  Sort Key: st.create_time DESC, st.id, st.tn
                                  Presorted Key: st.create_time
                                  Full-sort Groups: 2  Sort Method: quicksort  Average Memory: 27kB  Peak Memory: 27kB
                                  Buffers: shared hit=13977
                                  Worker 0:  Full-sort Groups: 3  Sort Method: quicksort  Average Memory: 27kB  Peak Memory: 27kB
                                  Worker 1:  Full-sort Groups: 3  Sort Method: quicksort  Average Memory: 27kB  Peak Memory: 27kB
                                  Worker 2:  Full-sort Groups: 4  Sort Method: quicksort  Average Memory: 27kB  Peak Memory: 27kB
                                  Worker 3:  Full-sort Groups: 2  Sort Method: quicksort  Average Memory: 27kB  Peak Memory: 27kB
                                  Worker 4:  Full-sort Groups: 4  Sort Method: quicksort  Average Memory: 27kB  Peak Memory: 27kB
                                  ->  Parallel Index Scan using ticket_create_time_desc_idx on ticket st  (cost=0.43..188160.35 rows=667802 width=30) (actual time=0.196..8.521 rows=97 loops=6)
                                        Buffers: shared hit=13472
                      ->  Index Scan using dynamic_field_value_field_values on dynamic_field_value dfv1  (cost=0.43..0.52 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=250)
                            Index Cond: ((object_id = st.id) AND (field_id = 200))
                            Filter: (lower((value_text)::text) = 'ccs-4180967 [9954762722309]'::text)
                            Rows Removed by Filter: 1
                            Buffers: shared hit=892
        Planning:
          Buffers: shared hit=18
        Planning Time: 0.642 ms
        Execution Time: 25.391 ms
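
The second plan reads as the output of running the logged statement manually. A sketch of that invocation, with the query text taken from the log above (the redundant 1=1 dropped for readability):

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT st.id, st.tn, st.create_time
FROM ticket st
INNER JOIN dynamic_field_value dfv1
        ON st.id = dfv1.object_id
       AND dfv1.field_id = 200
WHERE LOWER(dfv1.value_text) = LOWER('CCS-4180967 [9954762722309]')
ORDER BY st.create_time DESC
LIMIT 1;
```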

The "ticket" and "dynamic_field_value" tables were freshly analyzed.

Why is the difference in the Buffers: shared hit counts so large?

postgresql query-optimization psql



A:

1 upvote · jjanes · 11/12/2023 · #1

My previous answer overlooked the really obvious difference here. The first execution had to step through 3,728,611 ticket rows before finding one that paired with a dynamic_field_value row it could keep (in fact, it never found one at all), while the second execution only had to step through 250 ticket rows to find a keeper. So the first one had to hit far more buffers, because it had to do far more work. (I had scrolled up and down to see where the buffer counts diverged, but not left and right to see that the actual row counts diverged as well.)
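
The buffer counts in the two plans are consistent with this: nearly all of the difference sits in the inner index scan on dynamic_field_value, whose hit count scales with the number of outer loops, while the per-probe cost is almost identical in both runs. A quick sanity check on the numbers copied from the plans above:

```sql
-- Per-probe cost of the inner index scan, first run vs. second run.
-- Only the number of probes (outer loops) differs materially.
SELECT 11643777::numeric / 3728611 AS hits_per_probe_first,   -- ~3.12
       892::numeric / 250          AS hits_per_probe_second;  -- ~3.57
-- The totals also add up:
--   first run:  2209941 (outer scan) + 11643777 (inner scan) = 13853718
--   second run:   13977 (outer scan) +      892 (inner scan) =    14869
```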

据推测,发生的情况是,第一个查询以某种方式导致下游生成一个新票证,现在,当您使用这个略有更改的数据重复查询时,该新票证是第一个要测试的票证之一,并且它会提前停止。或者,它会导致一些dynamic_field_value被改变,因此它现在满足了“较低”的条件,而以前它没有。当您拥有 时,数据中的微小变化可能会导致性能的巨大变化。当然,只有几分钟的时间足以让数据发生微小的变化,特别是如果变化是故意发生的,而不是随机发生的。LIMIT 1