fix panic due to empty DestinationRule LoadBalancerPolicy #580
Abirdcfly wants to merge 1 commit into kubeedge:main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request needs approval from an approver in each of the affected files; approvers can indicate their approval by writing `/approve` in a comment.
Summary of Changes: This pull request addresses a critical bug that caused a runtime panic in the load balancer component when processing a `DestinationRule` without a `LoadBalancerPolicy`.
Signed-off-by: Abirdcfly <fp544037857@gmail.com>
Code Review
This pull request correctly addresses a panic that occurs when a DestinationRule without a LoadBalancerPolicy is updated. The added check prevents a nil pointer dereference, and the fix is effective. The other changes are stylistic formatting improvements.
While reviewing, I noticed a couple of areas for potential improvement in related code:
- There appears to be a similar potential for a panic in `OnDestinationRuleDelete`. If a `DestinationRule` without a policy is deleted, `lb.policyMap[svcPort]` could be `nil`, leading to a panic on `lb.policyMap[svcPort].Release()`. It would be beneficial to address this in a follow-up PR to make the `DestinationRule` handling more robust.
- The logic in `OnDestinationRuleUpdate` seems to recreate the load balancer policy object on every update (by calling `setLoadBalancerPolicy` unconditionally), even when the policy type hasn't changed. This can be inefficient, especially for `ConsistentHashPolicy`, which rebuilds its hash ring. Refactoring this to only update the existing policy object when the type is the same would improve performance. This could also be addressed in a follow-up.
Overall, the change in this PR is a good and necessary fix.
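The `OnDestinationRuleDelete` concern raised above can be sketched as follows. This is a hypothetical, simplified illustration, not edgemesh's actual code: the `Policy` interface, `roundRobin` type, and `onDestinationRuleDelete` helper are stand-ins, with only `lb.policyMap` and `Release()` taken from the review comment.

```go
package main

import "fmt"

// Policy is a hypothetical stand-in for the load balancer policy
// interface whose Release() the review says may be called on nil.
type Policy interface{ Release() }

type roundRobin struct{}

func (r *roundRobin) Release() { fmt.Println("released") }

// LoadBalancer holds one policy per service port, mirroring the
// lb.policyMap mentioned in the review.
type LoadBalancer struct {
	policyMap map[string]Policy
}

// onDestinationRuleDelete releases the policy only if one was actually
// registered, avoiding the panic the review warns about when a
// DestinationRule without a policy is deleted.
func (lb *LoadBalancer) onDestinationRuleDelete(svcPort string) {
	if policy, ok := lb.policyMap[svcPort]; ok && policy != nil {
		policy.Release()
	}
	delete(lb.policyMap, svcPort)
}

func main() {
	lb := &LoadBalancer{policyMap: map[string]Policy{"default/svc:80": &roundRobin{}}}

	lb.onDestinationRuleDelete("default/svc:80") // releases the registered policy
	lb.onDestinationRuleDelete("default/svc:80") // no entry left: safely does nothing
	fmt.Println("done")
}
```

The second delete exercises the missing-entry path that would previously dereference a nil interface value.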
What type of PR is this?
/kind bug
What this PR does / why we need it:
```
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x38 pc=0x1d8f337]

goroutine 303 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x1?})
	/code/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0xd7
panic({0x20a2ca0, 0x3c91910})
	/usr/local/go/src/runtime/panic.go:884 +0x212
github.com/kubeedge/edgemesh/pkg/loadbalancer.(*LoadBalancer).OnDestinationRuleUpdate(0xc000293130, 0xc0000bdc00?, 0xc002ba2000)
	/code/pkg/loadbalancer/loadbalancer.go:778 +0x497
github.com/kubeedge/edgemesh/pkg/loadbalancer.(*LoadBalancer).handleUpdateDestinationRule(0xc000483040?, {0x23d50c0?, 0xc00021d4d8?}, {0x23d50c0?, 0xc002ba2000?})
	/code/pkg/loadbalancer/loadbalancer.go:376 +0x65
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnUpdate(...)
	/code/vendor/k8s.io/client-go/tools/cache/controller.go:238
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
	/code/vendor/k8s.io/client-go/tools/cache/shared_informer.go:785 +0xf7
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x0?)
	/code/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00361ff38?, {0x29de7a0, 0xc0031fc1e0}, 0x1, 0xc0007372c0)
	/code/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0xc00361ff88?)
	/code/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/code/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc002ee6400?)
	/code/vendor/k8s.io/client-go/tools/cache/shared_informer.go:781 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
	/code/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/code/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x85
```

The panic occurs because the `DestinationRule` has no `LoadBalancerPolicy`; note that such a `DestinationRule` may not be related to KubeEdge or edge computing at all.
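The shape of the guard such a fix needs can be sketched as below. The types here are simplified, hypothetical stand-ins for the Istio `DestinationRule` spec (the real diff is in the PR, not reproduced on this page); the point is that every pointer on the path to the load balancer settings must be checked before dereferencing.

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the DestinationRule types
// involved in the panic (illustration only, not edgemesh's code).
type LoadBalancerSettings struct{ Simple string }
type TrafficPolicy struct{ LoadBalancer *LoadBalancerSettings }
type DestinationRule struct{ TrafficPolicy *TrafficPolicy }

// getLoadBalancerPolicy guards every pointer on the path, so a
// DestinationRule without a LoadBalancer section is skipped instead
// of causing a nil pointer dereference.
func getLoadBalancerPolicy(dr *DestinationRule) (string, bool) {
	if dr == nil || dr.TrafficPolicy == nil || dr.TrafficPolicy.LoadBalancer == nil {
		return "", false
	}
	return dr.TrafficPolicy.LoadBalancer.Simple, true
}

func main() {
	// A DestinationRule with no traffic policy at all: unguarded code
	// dereferencing LoadBalancer here is what produced the SIGSEGV above.
	empty := &DestinationRule{}
	if _, ok := getLoadBalancerPolicy(empty); !ok {
		fmt.Println("no load balancer policy, skipping")
	}

	full := &DestinationRule{TrafficPolicy: &TrafficPolicy{
		LoadBalancer: &LoadBalancerSettings{Simple: "ROUND_ROBIN"},
	}}
	if p, ok := getLoadBalancerPolicy(full); ok {
		fmt.Println("policy:", p)
	}
}
```

An informer event handler like `OnDestinationRuleUpdate` would call such a guard and return early on `false` rather than proceeding to build a policy.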
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?: