
kafka-consumer(ticdc): tolerate replayed resolved and DDL events (#12596) #12619

Open

ti-chi-bot wants to merge 1 commit into pingcap:release-7.5 from ti-chi-bot:cherry-pick-12596-to-release-7.5


Conversation

@ti-chi-bot
Member

This is an automated cherry-pick of #12596

What problem does this PR solve?

Issue Number: close #12595

What is changed and how it works?

  • treat replayed resolved/checkpoint fallback in cmd/kafka-consumer as duplicate delivery instead of a fatal error
  • deduplicate replayed DDL events by logical DDL identity instead of pointer identity (see the sketch after this list)
  • add regression tests covering replayed resolved/checkpoint handling and equivalent versus split DDL events
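
A minimal sketch of the deduplication idea, assuming a hypothetical ddlEventKey built from the DDL's logical fields and a local ddlEvent type standing in for model.DDLEvent; the exact fields and helper used in the PR may differ:

package main

import "fmt"

// ddlEventKey identifies a DDL by its logical content rather than by pointer
// identity, so a replayed copy of the same DDL maps to the same key.
type ddlEventKey struct {
	startTs  uint64
	commitTs uint64
	query    string
}

// ddlEvent stands in for model.DDLEvent so this sketch is self-contained.
type ddlEvent struct {
	StartTs  uint64
	CommitTs uint64
	Query    string
}

// recordDDL reports whether the DDL is a replay of one already recorded,
// and records it otherwise.
func recordDDL(seen map[ddlEventKey]struct{}, ddl *ddlEvent) (replayed bool) {
	key := ddlEventKey{startTs: ddl.StartTs, commitTs: ddl.CommitTs, query: ddl.Query}
	if _, ok := seen[key]; ok {
		return true // duplicate MQ delivery, safe to ignore
	}
	seen[key] = struct{}{}
	return false
}

func main() {
	seen := make(map[ddlEventKey]struct{})
	first := &ddlEvent{StartTs: 1, CommitTs: 2, Query: "CREATE TABLE t (a INT)"}
	replay := &ddlEvent{StartTs: 1, CommitTs: 2, Query: "CREATE TABLE t (a INT)"}
	fmt.Println(recordDDL(seen, first))  // false: first delivery is applied
	fmt.Println(recordDDL(seen, replay)) // true: replayed copy is ignored
}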

Check List

Tests

  • Unit test
  • Manual test

Questions

Will it cause performance regression or break compatibility?

No. This only makes the standalone Kafka consumer tolerate duplicate MQ delivery in line with TiCDC's at-least-once behavior.

Do you need to update user documentation, design documentation or monitoring documentation?

No.

Release note

Fix `cdc_kafka_consumer` to tolerate replayed resolved/checkpoint and equivalent DDL messages under duplicate MQ delivery.

Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
@ti-chi-bot added the lgtm, release-note, size/L, and type/cherry-pick-for-release-7.5 labels Apr 24, 2026
@ti-chi-bot
Contributor

ti-chi-bot Bot commented Apr 24, 2026

This cherry pick PR is for a release branch and has not yet been approved by triage owners.
Adding the do-not-merge/cherry-pick-not-approved label.

To merge this cherry pick:

  1. It must be LGTMed and approved by the reviewers first.
  2. For pull requests to TiDB-x branches, it must have no failed tests.
  3. After it has the lgtm and approved labels, please wait for cherry-pick merge approval from the triage owners.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@ti-chi-bot
Contributor

ti-chi-bot Bot commented Apr 24, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign kennytm for approval. For more information see the Code Review Process.
Please ensure that each of them provides their approval before proceeding.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot Bot added the size/XXL label and removed the size/L label Apr 24, 2026
@gemini-code-assist Bot left a comment

Code Review

This pull request introduces a Kafka consumer writer and partition progress tracking system to handle DDL and DML events. The review highlights critical concurrency concerns: data races on shared fields of the writer and partitionProgress structs that need synchronization. It also flags a likely compilation error around the Seq field, a busy-wait loop in syncFlushRowChangedEvents that wastes CPU, and several smaller improvements to code structure and log clarity.

Comment on lines +125 to +141
type writer struct {
	option *option

	ddlList            []*model.DDLEvent
	ddlWithMaxCommitTs *model.DDLEvent
	// ddlKeysWithMaxCommitTs records every logical DDL seen at the current
	// maximum CommitTs, so replayed prefixes of split DDL sequences can be
	// ignored without collapsing distinct DDLs that share the same CommitTs.
	ddlKeysWithMaxCommitTs map[ddlEventKey]struct{}
	ddlSink                ddlsink.Sink

	// sinkFactory is used to create table sink for each table.
	sinkFactory *eventsinkfactory.SinkFactory
	progresses  []*partitionProgress

	eventRouter *dispatcher.EventRouter
}

critical

The writer struct is accessed concurrently from multiple partition processing goroutines (via WriteMessage), but it lacks synchronization for its shared fields such as ddlList, ddlWithMaxCommitTs, and ddlKeysWithMaxCommitTs. This will lead to data races and inconsistent state. You should add a sync.Mutex to protect these fields, similar to the implementation in the original Consumer struct in main.go.
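
One possible shape for that fix, sketched as a fragment against the struct above; the mutex placement and the simplified appendDDL signature are assumptions, not the PR's actual change, and the sync import is needed:

type writer struct {
	// mu guards the shared DDL bookkeeping: WriteMessage runs concurrently
	// from one goroutine per partition, so every access goes through mu.
	mu sync.Mutex

	ddlList                []*model.DDLEvent
	ddlWithMaxCommitTs     *model.DDLEvent
	ddlKeysWithMaxCommitTs map[ddlEventKey]struct{}
	// ... remaining fields unchanged ...
}

func (w *writer) appendDDL(ddl *model.DDLEvent, offset kafka.Offset) {
	w.mu.Lock()
	defer w.mu.Unlock()
	// the existing fallback / dedup / append logic runs here, now under the lock
}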

startTs: ddl.StartTs,
commitTs: ddl.CommitTs,
query: ddl.Query,
seq: ddl.Seq,

critical

The model.DDLEvent struct as defined in cdc/model/sink.go (lines 699-710) does not have a Seq field. Accessing ddl.Seq here and in the regression tests will cause a compilation error. Please verify if Seq needs to be added to the DDLEvent model or if another field was intended.

Comment on lines +76 to +77
watermark uint64
watermarkOffset kafka.Offset

high

The watermark and watermarkOffset fields in partitionProgress are updated in updateWatermark and read in getMinWatermark concurrently across different goroutines. Access to these fields should be synchronized using sync/atomic or a mutex to ensure memory safety and visibility across threads.
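
A minimal sketch of the atomic variant, written as a fragment against the struct above; it assumes Go 1.19+ sync/atomic types and that kafka.Offset converts to int64, and it is not the PR's actual change:

type partitionProgress struct {
	// watermark and watermarkOffset are written by updateWatermark and read
	// by getMinWatermark from different goroutines, so they are stored
	// through sync/atomic for cross-goroutine visibility.
	watermark       atomic.Uint64
	watermarkOffset atomic.Int64
	// ... remaining fields unchanged ...
}

func (p *partitionProgress) updateWatermark(ts uint64, offset kafka.Offset) {
	p.watermark.Store(ts)
	p.watermarkOffset.Store(int64(offset))
}

func (p *partitionProgress) loadWatermark() (uint64, kafka.Offset) {
	return p.watermark.Load(), kafka.Offset(p.watermarkOffset.Load())
}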

Comment on lines +543 to +562
for {
	select {
	case <-ctx.Done():
		log.Warn("sync flush row changed event canceled", zap.Error(ctx.Err()))
		return
	default:
	}
	flushedResolvedTs := true
	for _, tableSink := range progress.tableSinkMap {
		if err := tableSink.UpdateResolvedTs(resolvedTs); err != nil {
			log.Panic("Failed to update resolved ts", zap.Error(err))
		}
		if tableSink.GetCheckpointTs().Less(resolvedTs) {
			flushedResolvedTs = false
		}
	}
	if flushedResolvedTs {
		return
	}
}

high

The loop in syncFlushRowChangedEvents is a busy-wait loop that can cause high CPU usage if the downstream sink takes time to progress. It repeatedly calls UpdateResolvedTs and checks the checkpoint without any yielding. Adding a small sleep inside the loop would significantly reduce CPU overhead.

for {
		select {
		case <-ctx.Done():
			log.Warn("sync flush row changed event canceled", zap.Error(ctx.Err()))
			return
		default:
		}
		flushedResolvedTs := true
		for _, tableSink := range progress.tableSinkMap {
			if err := tableSink.UpdateResolvedTs(resolvedTs); err != nil {
				log.Panic("Failed to update resolved ts", zap.Error(err))
			}
			if tableSink.GetCheckpointTs().Less(resolvedTs) {
				flushedResolvedTs = false
			}
		}
		if flushedResolvedTs {
			return
		}
		time.Sleep(10 * time.Millisecond)
	}

Comment on lines +179 to +184
for i := 0; i < int(o.partitionNum); i++ {
	if err != nil {
		log.Panic("cannot create the decoder", zap.Error(err))
	}
	w.progresses[i] = newPartitionProgress(int32(i), decoder)
}

medium

The error check for NewDecoder is redundant inside the loop since err is not modified within the loop. It should be checked once outside the loop for better clarity.

Suggested change
for i := 0; i < int(o.partitionNum); i++ {
	if err != nil {
		log.Panic("cannot create the decoder", zap.Error(err))
	}
	w.progresses[i] = newPartitionProgress(int32(i), decoder)
}

if err != nil {
	log.Panic("cannot create the decoder", zap.Error(err))
}
for i := 0; i < int(o.partitionNum); i++ {
	w.progresses[i] = newPartitionProgress(int32(i), decoder)
}

// for DDL a / b received in the order, a.CommitTs < b.CommitTs should be true.
func (w *writer) appendDDL(ddl *model.DDLEvent, offset kafka.Offset) {
	// DDL CommitTs fallback, just crash it to indicate the bug.
	if w.ddlWithMaxCommitTs != nil && ddl.CommitTs < w.ddlWithMaxCommitTs.CommitTs {

medium

This comment is outdated and contradicts the PR's objective of tolerating replayed events. It should be updated to reflect that fallbacks are now tolerated and ignored.

// DDL CommitTs fallback, ignore it to tolerate duplicate delivery.

if o.upstreamTiDBDSN != "" {
	db, err = openDB(ctx, o.upstreamTiDBDSN)
	if err != nil {
		log.Panic("cannot open the upstream TiDB, handle key only enabled",

medium

Typo in the log message: "handle key only enabled" should be "handle-key-only enabled" to match the configuration terminology.

Suggested change
log.Panic("cannot open the upstream TiDB, handle key only enabled",
log.Panic("cannot open the upstream TiDB, handle-key-only enabled",

@ti-chi-bot
Contributor

ti-chi-bot Bot commented Apr 24, 2026

@ti-chi-bot: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name     Commit    Details    Required    Rerun command
pull-verify   21beaa4   link       true        /test pull-verify

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


Labels

do-not-merge/cherry-pick-not-approved
lgtm
release-note Denotes a PR that will be considered when it comes time to generate release notes.
size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files.
type/cherry-pick-for-release-7.5 This PR is cherry-picked to release-7.5 from a source PR.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants