
Conversation

@rafaelwestphal
Contributor

rafaelwestphal commented Oct 15, 2025

Description

Related issue

How has this been tested?

Checklist:

  • Unit tests
    • Unit tests do not apply.
    • Unit tests have been added/modified and passed for this PR.
  • Integration tests
    • Integration tests do not apply.
    • Integration tests have been added/modified and passed for this PR.
  • Documentation
    • This PR introduces no user visible changes.
    • This PR introduces user visible changes and the corresponding documentation change has been made.
  • Minor version bump
    • This PR introduces no new features.
    • This PR introduces new features, and there is a separate PR to bump the minor version since the last release already.
    • This PR bumps the version.

@quentinmit
Member

Why is this using otlphttp instead of otlpgrpc?

@rafaelwestphal
Contributor Author

rafaelwestphal commented Oct 16, 2025

Why is this using otlphttp instead of otlpgrpc?

otlpgrpc doesn't allow us to set the user agent, but otlphttp does. We're using otlphttp until that support is implemented. IIRC from the initial analysis, implementing it was not trivial.
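For context, a minimal sketch of the difference, assuming the collector's standard otlphttp headers option; the helper name, endpoint, and agent string are illustrative only, not the PR's actual code:

// Hypothetical sketch: the otlphttp exporter accepts arbitrary HTTP
// headers (including User-Agent) through its confighttp-based config,
// which is what lets us stamp our own agent string. The otlp (gRPC)
// exporter's headers become gRPC metadata instead, so they do not
// replace the gRPC-generated user agent.
func otlpHTTPExporterConfig(userAgent string) map[string]interface{} {
	return map[string]interface{}{
		"endpoint": "https://otlp.example.com:4318", // illustrative
		"headers": map[string]interface{}{
			"User-Agent": userAgent,
		},
	}
}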

rafaelwestphal requested review from a team, franciscovalentecastro and ridwanmsharif and removed request for a team October 22, 2025 15:06
rafaelwestphal marked this pull request as ready for review October 22, 2025 15:07
return ConvertToOtlpExporter(pipeline, ctx, false, true)
}

func ConvertToOtlpExporter(pipeline otel.ReceiverPipeline, ctx context.Context, isPrometheus bool, isSystem bool) otel.ReceiverPipeline {
Contributor

It is great that all the logic to set up the OTLP exporter instead of the GCM exporter is now contained within ConvertToOtlpExporter!

Question

I want to follow up on this comment made above:

The problem is that the transformation is different depending on whether the current exporter is configured as a system one or not. If I just declare it as OTLP, I don't know how to properly set up the pipeline later. It can either be a GMP, GCM system, or GCM OTel exporter.

When the migration to the OTLP exporter is fully complete and we remove all the other exporters (System, OTel, GMP):

  • How are we going to determine which combination of processors (isSystem, isPrometheus) should be applied to each metric pipeline?
  • Should we add another ReceiverPipeline "property" to differentiate them?
  • When we remove the GMP exporter, how is the function ConvertPrometheusExporterToOtlpExporter going to be renamed/refactored?

I bring up these questions since I feel the solution of using a "converter" (ConvertToOtlpExporter) considers only the intermediate state where GCM is the main exporter, and will need further refactoring once those exporters are removed. My suggestion here aims to find a solution that won't need any refactoring later.

Suggestion

One idea is to have 3 exporter types which help determine the specific processors needed for each pipeline, while keeping the possibility of storing the logic in one place (see the sketch after this list):

  • System_OTLP
  • GMP_OTLP
  • OTLP

This keeps the abstraction of a "ReceiverPipeline" while also having a way to differentiate the specific requirements of each metric pipeline.
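
A rough sketch of the idea, with hypothetical names; Component stands in for the confgenerator type, and the helpers in each case are placeholders, not the PR's actual implementation:

// Hypothetical sketch, not the PR's code: encode the OTLP flavor in the
// exporter type so per-pipeline processor logic stays in one place.
type ExporterType int

const (
	OTLP ExporterType = iota // plain OTLP pipelines
	System_OTLP              // system metrics exported over OTLP
	GMP_OTLP                 // GMP/Prometheus metrics exported over OTLP
)

// otlpProcessorsFor returns the processors a pipeline needs based only
// on its declared exporter type; the helpers below are hypothetical.
func otlpProcessorsFor(t ExporterType) []Component {
	switch t {
	case System_OTLP:
		return systemOtlpProcessors() // hypothetical helper
	case GMP_OTLP:
		return gmpOtlpProcessors() // hypothetical helper
	default:
		return commonOtlpProcessors() // hypothetical helper
	}
}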

Contributor

@franciscovalentecastro Dec 8, 2025

Synced offline with @rafaelwestphal. He mentioned that eventually we are going to consolidate all OTLP exporter processor logic once the migration is complete. Using ConvertToOtlpExporter is considered an intermediate step, since it enables development to happen for each receiver separately.

Contributor

@franciscovalentecastro left a comment

LGTM, modulo addressing the last comments and all integration tests passing.

if processors, ok := receiverPipeline.Processors["metrics"]; ok && expOtlpExporter {
	receiverPipeline.Processors["metrics"] = append(
		processors,
		otel.MetricStartTime(),
Contributor

[Requested change] Create an otlpExporterComponents (or otlpExporterProcessor) function that contains MetricStartTime() and will also serve as the home for future additions of OTLP exporter logic.
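
A minimal sketch of the requested helper; the function name follows this comment's suggestion, and otel.MetricStartTime() is taken from the diff above:

// otlpExporterComponents collects every processor that OTLP-exporter
// metric pipelines should get, so future OTLP-specific logic has a
// single home. Sketch only, not the PR's code.
func otlpExporterComponents() []otel.Component {
	return []otel.Component{
		otel.MetricStartTime(),
		// future OTLP exporter processors go here
	}
}

The call site in the diff above would then become append(processors, otlpExporterComponents()...).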

}

expOtlpExporter := experimentsFromContext(ctx)["otlp_exporter"]
if processors, ok := receiverPipeline.Processors["metrics"]; ok && expOtlpExporter {
Contributor

[Comment] Just as a general note from our offline discussion: IMO this is the best place to consolidate all OTLP exporter logic (e.g. post-processors), since it resembles how otelSetLogNameComponents is unconditionally added to all logging pipelines.

I believe you can remove all uses of ConvertToOtlpExporter and add all the "OTLP" logic here. Please keep track of these ideas as the migration moves forward.
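
A sketch of what that consolidation could look like at this call site, reusing the hypothetical otlpExporterComponents helper suggested earlier:

// Hypothetical consolidation point, mirroring how otelSetLogNameComponents
// is added to every logging pipeline: append all OTLP-exporter processors
// in one place instead of calling ConvertToOtlpExporter per receiver.
if processors, ok := receiverPipeline.Processors["metrics"]; ok && expOtlpExporter {
	receiverPipeline.Processors["metrics"] = append(processors, otlpExporterComponents()...)
}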

- grpc.client.attempt.duration_count
- googlecloudmonitoring/point_count
interval/loggingmetrics_7:
  interval: 1m
Contributor

[Requested change] Add a confgenerator/testdata case for self metrics to track future regressions in "OTLP exporter self metrics" pipelines.

@franciscovalentecastro
Contributor

[nit] Add a PR description with details about the changes.

copyProcessor := CopyHostIDToInstanceID()
processorNames = append(processorNames, copyProcessor.name("_global_0"))
processors[copyProcessor.name("_global_0")] = copyProcessor.Config
// Similar to the resource detector, for any pipeline that is using the
Contributor

@franciscovalentecastro Dec 15, 2025

[Requested Change]

After seeing the implementation of resourceDetectionProcessorNames above:

Why not create a similar pair of exporterTypeProcessors and exporterTypeProcessorNames to handle adding processors at the end of every pipeline?

exporterType := receiverPipeline.ExporterTypes[pipeline.Type]
if name, ok := exporterTypeProcessorNames[exporterType]; ok {
	processorNames = append(processorNames, name)
	processors[name] = exporterTypeProcessors[exporterType].Config
}

This also removes the need for the expOtlpExporter flag here, since everything is encoded by using the otel.OTLP exporter. Also, the current implementation (if expOtlpExporter && resourceDetectionNames) adds the processors to pipelines that don't use the OTLP exporter.

pipelines:
  metrics/default__pipeline_hostmetrics:
    exporters:
    - googlecloud
    processors:
    - agentmetrics/hostmetrics_0
    - filter/hostmetrics_1
    - metricstransform/hostmetrics_2
    - filter/default__pipeline_hostmetrics_0
    - resourcedetection/_global_0
    - transform/_global_0
    - metricstarttime/default__pipeline_hostmetrics_0
    receivers:
    - hostmetrics/hostmetrics

[Context]

For OTel Logging, I've found a need (see #2165) for a separate otel.Logs exporter to handle "Log Buffering" settings (which can't be applied to the metrics exporters). I also needed to add a Batch processor at the end of the pipeline for "log buffering" and overall "logs export performance".

This is my draft of the idea I propose above. IMO, this is extensible and encodes all of this "processor logic" depending on the exporter:

exporterTypeProcessors := map[ExporterType]Component{
	Logs: BatchProcessor(),
	OTLP: CopyHostIDToInstanceID(), // b/459468648
}
exporterTypeProcessorNames := map[ExporterType]string{
	Logs: exporterTypeProcessors[Logs].name("_global_2"),
	OTLP: exporterTypeProcessors[OTLP].name("_global_3"),
}
for prefix, pipeline := range c.Pipelines {
	// Receiver pipelines need to be instantiated once, since they might have more than one type.
	// We do this work more than once if it's in more than one pipeline, but it should just overwrite the same names.
	receiverPipeline := c.ReceiverPipelines[pipeline.ReceiverPipelineName]
	receiverName := receiverPipeline.Receiver.name(pipeline.ReceiverPipelineName)
	var receiverProcessorNames []string
	p, ok := receiverPipeline.Processors[pipeline.Type]
	if !ok {
		// This receiver pipeline isn't for this data type.
		continue
	}
	for i, processor := range p {
		name := processor.name(fmt.Sprintf("%s_%d", pipeline.ReceiverPipelineName, i))
		receiverProcessorNames = append(receiverProcessorNames, name)
		processors[name] = processor.Config
	}
	receivers[receiverName] = receiverPipeline.Receiver.Config
	// Everything else in the pipeline is specific to this Type.
	var processorNames []string
	processorNames = append(processorNames, receiverProcessorNames...)
	for i, processor := range pipeline.Processors {
		name := processor.name(fmt.Sprintf("%s_%d", prefix, i))
		processorNames = append(processorNames, name)
		processors[name] = processor.Config
	}
	rdm := receiverPipeline.ResourceDetectionModes[pipeline.Type]
	if name, ok := resourceDetectionProcessorNames[rdm]; ok {
		processorNames = append(processorNames, name)
		processors[name] = resourceDetectionProcessors[rdm].Config
	}
	exporterType := receiverPipeline.ExporterTypes[pipeline.Type]
	if name, ok := exporterTypeProcessorNames[exporterType]; ok {
		processorNames = append(processorNames, name)
		processors[name] = exporterTypeProcessors[exporterType].Config
	}
	if _, ok := exporterNames[exporterType]; !ok {
		exporter := c.Exporters[exporterType]

Contributor

P.S. exporterTypeProcessors could map each exporter type to a []Component so multiple processors can be added at the end of the pipeline.
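
A sketch of that variant, under the same hypothetical names as the draft above (the "_global_exp_%d" naming scheme is illustrative):

// Variant of the draft above: each exporter type maps to a slice of
// processors, so several can be appended at the end of a pipeline.
exporterTypeProcessors := map[ExporterType][]Component{
	Logs: {BatchProcessor()},
	OTLP: {CopyHostIDToInstanceID()}, // b/459468648
}
for i, proc := range exporterTypeProcessors[exporterType] {
	// Hypothetical naming scheme to keep generated names unique.
	name := proc.name(fmt.Sprintf("_global_exp_%d", i))
	processorNames = append(processorNames, name)
	processors[name] = proc.Config
}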

Contributor

@franciscovalentecastro left a comment

I'll keep the approval (so as not to block the PR), modulo addressing the [Requested Change] above.
