OTel News

February 2026

Curated by:

Welcome to the February 2026 edition of the OpenTelemetry News!

February was a stabilization-heavy month for OpenTelemetry. Declarative Configuration reached its first stable release, Semantic Conventions continued expanding into service metadata and GenAI domains, and new protocol discussions signaled continued attention to performance and efficiency. With KubeCon EU around the corner, the ecosystem feels both mature and forward-looking.

Highlights

KubeCon + CloudNativeCon EU 2026

KubeCon + CloudNativeCon EU 2026 takes place March 23–26 in Amsterdam, and OpenTelemetry will be everywhere.

The week kicks off with the Maintainers Summit on March 22, followed by Observability Day on March 23. Observability Day brings together project updates, schema evolution discussions, scaling strategies, and sessions on AI agents and next-generation telemetry use cases. If you’re looking for a concentrated signal on where observability is heading, this is the day to prioritize.

Several Datadog speakers will be presenting throughout the week. If you’re attending, feel free to connect with the team on-site.

A full list of OTel-related talks is available in the KubeCon EU 2026 OpenTelemetry blog post.

Declarative Configuration is now stable

Declarative Configuration has reached its first stable release, a significant milestone for cross-language SDK consistency. With backward compatibility now guaranteed for minor releases, the configuration model moves from “experimental” to “production-ready.” Expect broader SDK adoption and ecosystem tooling to follow.

If you’ve been waiting for stronger guarantees before adopting Declarative Configuration in production, this milestone provides them.

More details are available in the opentelemetry-configuration repository.
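To give a flavor of the format, a minimal configuration file might look like the following. This is a sketch based on the repository's sample files; field names may differ slightly in the stable schema, so consult the repository for the authoritative version.

```yaml
# Minimal declarative configuration sketch:
# configure a tracer provider with a batching
# span processor and an OTLP exporter.
file_format: "1.0"
tracer_provider:
  processors:
    - batch:
        exporter:
          otlp:
            endpoint: http://localhost:4318
```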


New Collector Releases

This news edition covers the OpenTelemetry Collector releases 0.145.0 and 0.146.0.

Rather than enumerating every change, here are the updates most likely to impact operators and platform teams.

Platform & Stability Signals

  • Improved exporter failure diagnostics (#13956)
    When the Collector's internal telemetry level is set to detailed, the otelcol_exporter_send_failed_* metrics now include:

    • error.type

    • error.permanent

    These attributes standardize error classification across gRPC status codes, Go context errors, and collector-specific failures. For teams operating large pipelines, this improves alerting precision and reduces guesswork during incident response.
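These metrics are only emitted at the detailed level, which is set in the Collector's own telemetry configuration:

```yaml
# Collector internal telemetry: raise the metrics
# level so the new error attributes are emitted.
service:
  telemetry:
    metrics:
      level: detailed
```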

Kubernetes Semantics Migration Continues

  • processor/k8s_attributes: Introduced semantic-convention-compliant feature gates to support migration to stable Kubernetes attributes (#44693)
    This update enables a controlled transition from legacy Kubernetes attribute naming (plural form) to the new stable semantic conventions (singular form). It is part of the broader effort to stabilize k8s resource/entity attributes.
    Two new feature gates were introduced (both alpha and disabled by default):

    • processor.k8sattributes.EmitV1K8sConventions
      Enables emission of the new stable singular-form attributes, for example:
      k8s.<workload>.label.<key>
      k8s.<workload>.annotation.<key>

    • processor.k8sattributes.DontEmitV0K8sConventions
      Disables the legacy plural-form attributes, for example:
      k8s.<workload>.labels.<key>
      k8s.<workload>.annotations.<key>

    Migration behavior:

    • Enable only EmitV1K8sConventions -> both legacy and stable attributes are emitted.
    • Enable both feature gates -> only stable attributes are emitted.
    • Attempting to disable legacy attributes without enabling stable ones is now rejected by validation.

    Unless you are actively testing the upcoming Kubernetes semantic conventions, keep these feature gates disabled until the conventions are finalized and declared stable.
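Feature gates are toggled on the Collector command line. A sketch of the two migration stages, assuming a binary named otelcol:

```
# Stage 1: emit both legacy (plural) and stable (singular) attributes.
otelcol --config config.yaml \
  --feature-gates=processor.k8sattributes.EmitV1K8sConventions

# Stage 2: emit only the stable attributes.
otelcol --config config.yaml \
  --feature-gates=processor.k8sattributes.EmitV1K8sConventions,processor.k8sattributes.DontEmitV0K8sConventions
```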

  • receiver/k8s_cluster: New opt-in service metrics derived from the Kubernetes Service and EndpointSlice APIs (#45620)

    These metrics provide visibility into:

    • Endpoint readiness states
    • Load balancer ingress counts

    For teams debugging traffic distribution or readiness issues, this adds meaningful observability at the service abstraction layer.

  • Renamed k8sattributes processor to k8s_attributes processor and added deprecated alias k8sattributes (#45894)
    This aligns naming conventions across processors. Existing configurations using k8sattributes will continue to work, but you should plan to migrate to the new name.
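In configuration, only the component ID changes; a minimal fragment showing the new name (receivers and exporters omitted):

```yaml
processors:
  # Previously `k8sattributes`, which still works as a deprecated alias.
  k8s_attributes:

service:
  pipelines:
    traces:
      processors: [k8s_attributes]
```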

Metric Accuracy Update

  • receiver/hostmetrics: process.context_switches now counts context switches across all threads (#36804)
    Previously, only the process's main (lead) thread was counted. This is marked as a breaking change because reported values will shift, but it corrects long-standing undercounting. Expect noticeably higher values in dashboards and alerts.

Experimentation & Advanced Tooling

  • receiver/vcr: Verbatim Capture & Replay (VCR) (#42877)
    VCR enables full-fidelity capture and replay of telemetry streams over defined time windows.

    This is particularly interesting for:

    • Education & Demo Environments
    • Performance, Load & Incident Reproduction
    • Observability Tooling Development
    • AI/ML Model Training & Analysis

    Replayable telemetry opens up new possibilities for deterministic debugging and performance validation.

  • extension/opamp: Added support for the AcceptsRestartCommand capability (#45056)
    When enabled, remote restart commands trigger a SIGHUP signal to reload configuration.

    This capability moves the Collector further toward centralized fleet management models. Note that it is behind a feature gate and currently not supported on Windows systems.

OpenTelemetry Collector Builder (OCB)

OCB has received several improvements that streamline building custom Collector distributions. These were inspired by OTel Unplugged discussions and include:

  • ocb init (#14530): initializes a new repository in the provided folder with a manifest to start building a custom Collector. This command is experimental and may evolve, but it simplifies bootstrapping custom Collector distributions.

  • Configurable telemetry provider (#14575): OCB manifests now support a telemetry field to specify a custom telemetry provider module. The provider's factory is injected into the generated Collector binary, replacing the default. This is especially useful for constrained targets such as WebAssembly, where swapping in a no-op provider can meaningfully reduce binary size.
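For illustration, the new manifest field might be used as follows. The module path is hypothetical and the exact field layout may differ from this sketch; see #14575 for the actual schema.

```yaml
dist:
  name: my-otelcol
  description: Custom Collector with a swapped-in telemetry provider

telemetry:
  # Hypothetical module path; point this at your provider implementation,
  # e.g. a no-op provider to shrink WebAssembly builds.
  gomod: github.com/example/noop-telemetry-provider v0.1.0
```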


Datadog Components

  • exporter/datadog:

    • OTLP logs now preserve array-type attributes (#45882)
      This improves structural fidelity when exporting complex log attributes.

    • Fixed a data race that could cause crashes during span processing (#46051)

  • processor/datadogsemantics: Deprecated (#46052)
    If you rely on this component, please contact Datadog support.


Community Proposals

STEF: A High-Performance Telemetry Protocol

At the February 18 Collector SIG, Tigran Najaryan (long-time OTel contributor and member of the Technical Committee) presented STEF, a columnar telemetry protocol designed for improved wire efficiency and serialization speed.

Early benchmarks indicate:

  • Significantly reduced payload sizes
  • Serialization speeds an order of magnitude faster than protobuf

A metrics receiver and exporter are already available in collector-contrib. Support for traces and logs is planned, and a Java implementation is underway.

If performance trade-offs around OTLP have ever been a concern in your environment, STEF is worth watching. The possibility of donating it to OpenTelemetry was also discussed — highlighting that protocol evolution remains an active discussion within the community.

Learn more via the slides and repository.

Collector MCP Server

A proposal to build an MCP (Model Context Protocol) server for the OpenTelemetry Collector has been approved by the Governance Committee, with scope focused on collector use cases. The goal is to help AI agents read and write valid collector configurations, understand API breaking changes, upgrade the collector, and troubleshoot issues.

The project is looking for contributors. If you're interested in contributing, please join the Collector SIG.

Profiling Signal Progress

A key proto-level PR for the Profiling signal was merged in late February: opentelemetry-proto#733, which introduces reference-based attributes for profiling data.

Rather than repeating full string values for resource attributes across every payload, this change uses a dictionary lookup table — data points reference strings by integer index instead of carrying the full string inline. This is particularly impactful for workloads with frequent process forking (common in Kubernetes environments) where resource attributes are repeated thousands of times per payload.
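The lookup-table idea can be sketched in a few lines of Python. This is an illustrative sketch of dictionary encoding, not the actual proto wire format:

```python
def build_string_table(records):
    """Encode repeated attribute strings as integer references into a
    shared string table, mimicking the dictionary-lookup approach."""
    table = []    # index -> string
    index = {}    # string -> index
    encoded = []
    for attrs in records:
        row = {}
        for key, value in attrs.items():
            for s in (key, value):
                if s not in index:      # intern each string once
                    index[s] = len(table)
                    table.append(s)
            row[index[key]] = index[value]
        encoded.append(row)
    return table, encoded

table, rows = build_string_table([
    {"k8s.pod.name": "api-7f9c", "service.name": "checkout"},
    {"k8s.pod.name": "api-7f9c", "service.name": "checkout"},
])
# Each repeated string is stored once in `table`;
# the per-record rows carry only small integers.
```

When the same resource attributes recur thousands of times per payload, as with forked processes on Kubernetes, the rows shrink dramatically while the table grows only with the number of distinct strings.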

Benchmarks produced by Felix Geisendörfer and Nayef Ghattas (both Datadog) across k8s workloads showed:

  • 40% reduction in uncompressed payload size
  • 4% reduction in gzip-compressed size

The change currently applies only to the Profiling signal. Stable signals (Traces, Metrics, Logs) are unaffected. The design intentionally leaves the door open for other signals to adopt dictionary encoding in the future, without requiring breaking changes today.

Check out the full benchmark details >


Prometheus–OTel Interop

Two specification PRs merged in February formalize how Prometheus metrics map to OTLP — a long-standing source of ambiguity for teams bridging the two ecosystems.

  • Prometheus Counter -> OTLP Monotonic Sum is now stable (#4862): The conversion of Prometheus Counters to OTLP Monotonic Sums is now a stable part of the spec. Teams scraping Prometheus endpoints and forwarding to OTLP backends can rely on this behavior without worrying about future breaking changes.

  • Prometheus Gauge -> OTLP Gauge is now stable (#4871): The mapping of Prometheus Gauges to OTLP Gauges is also now stable. This closes the loop on the most common Prometheus metric types and gives implementers a solid foundation for interoperability.

With both conversions stabilized, tools that bridge Prometheus and OpenTelemetry have a well-defined contract to build against.


Semantic Conventions v1.40.0

The v1.40.0 release includes breaking changes and notable expansions across databases, GenAI, services, and Kubernetes domains.

One attribute stands out: service.criticality.

This attribute allows services to declare operational importance (critical, high, medium, low). Standardizing criticality at the semantic level enables:

  • Criticality-aware sampling strategies
  • Priority-based alerting
  • Context-aware incident response

Large platform teams have often implemented similar logic internally. Bringing it into the semantic layer is a meaningful step toward ecosystem-wide consistency.
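Like any resource attribute, service.criticality can be set via the standard OTEL_RESOURCE_ATTRIBUTES environment variable; the service name here is just an example:

```shell
# Declare this service as high-criticality alongside its name.
export OTEL_RESOURCE_ATTRIBUTES="service.name=checkout,service.criticality=high"
```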

Declarative semconv version selection (#3424): A new Declarative Configuration schema defines how instrumentation libraries should expose semconv stability controls — superseding OTEL_SEMCONV_STABILITY_OPT_IN. Per-domain settings (version, experimental, dual_emit) give operators fine-grained migration control without relying on environment variables.

Combined with Declarative Configuration reaching stability this month, semantic version selection becomes significantly easier to manage across environments.
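As a purely illustrative sketch of what per-domain control could look like in a declarative configuration file (the actual schema is defined in #3424 and may be structured differently):

```yaml
# Hypothetical shape: per-domain semconv stability controls.
instrumentation:
  semconv:
    http:
      version: 1.40.0
      dual_emit: true     # emit both old and new attributes during migration
    gen_ai:
      experimental: true  # opt in to experimental conventions for this domain
```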


Datadog News

OpenTelemetry API Support

Datadog SDKs provide an implementation of the OpenTelemetry API for traces, metrics, and logs. This means you can maintain vendor-neutral instrumentation of your services, while still taking advantage of Datadog’s native implementation, features, and products.
Read the docs >


Get Involved

Want to contribute to OpenTelemetry? Here are some ways to get started:

Resources


Did we miss something? If you have news to share or want to contribute to the next edition, please reach out to us via otel-news@datadoghq.com.