
Conversation

@AngersZhuuuu
Contributor

What changes were proposed in this pull request?

Spark driver logs sometimes show the following message:

java.lang.OutOfMemoryError: GC overhead limit exceeded

However, the Spark Executors page does not show driver GC metrics. When driver GC is heavy, all executors may be left waiting for scheduling, wasting resources. To diagnose such heavy-GC scenarios this information should be easy to find, so this PR adds driver GC metrics to the Executors page.

[Screenshot 2025-12-24 17:52:44]
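For context, the JVM already exposes cumulative GC time on the driver through its management beans, which is the same mechanism Spark uses to compute the executor-side jvmGCTime metric. A minimal sketch of reading it (the class name is illustrative; the PR's actual plumbing into the Executors page may differ):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class DriverGcTime {
    // Sum accumulated collection time (in ms) across all GC beans in this JVM.
    // A negative getCollectionTime() means the collector does not report it.
    public static long totalGcTimeMillis() {
        long total = 0L;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime();
            if (t > 0) {
                total += t;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println("Driver GC time so far (ms): " + totalGcTimeMillis());
    }
}
```

Polling this value periodically on the driver and reporting it to the UI backend would make driver GC pressure visible without touching task-level metrics.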

Why are the changes needed?

To help users better understand their jobs.

Does this PR introduce any user-facing change?

Yes. Users can see the driver-side GC time on the Executors page.

How was this patch tested?

Before
[Screenshot 2025-12-24 17:52:44]

After
[Screenshot 2025-12-24 17:49:07]

Was this patch authored or co-authored using generative AI tooling?

No

@github-actions github-actions bot added the CORE label Dec 24, 2025
@yaooqinn
Member

Making the driver's GC observable makes sense to me, but simply folding it into the task GC metrics seems a bit tricky to me.

@AngersZhuuuu
Contributor Author

> Making the driver's GC observable makes sense to me, but simply folding it into the task GC metrics seems a bit tricky to me.

Hmm, yeah. Add a new column to these two tables?

@yaooqinn
Member

I don't think it's eligible

@AngersZhuuuu
Copy link
Contributor Author

> I don't think it's eligible

Any good suggestions?
