Exporters
Send telemetry to the OpenTelemetry Collector to make sure it’s exported correctly. Using the Collector in production environments is a best practice. To visualize your telemetry, export it to a backend such as Jaeger, Zipkin, Prometheus, or a vendor-specific backend.
Available exporters
The registry contains a list of exporters for Python.
Among exporters, OpenTelemetry Protocol (OTLP) exporters are designed with the OpenTelemetry data model in mind, emitting OTel data without any loss of information. Furthermore, many tools that operate on telemetry data support OTLP (such as Prometheus, Jaeger, and most vendors), providing you with a high degree of flexibility when you need it. To learn more about OTLP, see OTLP Specification.
This page covers the main OpenTelemetry Python exporters and how to set them up.
If you use zero-code instrumentation, you can learn how to set up exporters by following the Configuration Guide.
OTLP
Collector Setup
If you have an OTLP collector or backend already set up, you can skip this section and set up the OTLP exporter dependencies for your application.
To try out and verify your OTLP exporters, you can run the collector in a docker container that writes telemetry directly to the console.
In an empty directory, create a file called collector-config.yaml with the following content:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
exporters:
  debug:
    verbosity: detailed
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      exporters: [debug]
    logs:
      receivers: [otlp]
      exporters: [debug]
Now run the collector in a docker container:
docker run -p 4317:4317 -p 4318:4318 --rm -v $(pwd)/collector-config.yaml:/etc/otelcol/config.yaml otel/opentelemetry-collector
This collector is now able to accept telemetry via OTLP. Later you may want to configure the collector to send your telemetry to your observability backend.
Dependencies
If you want to send telemetry data to an OTLP endpoint (like the OpenTelemetry Collector, Jaeger, or Prometheus), you can choose between two different protocols to transport your data: HTTP/protobuf or gRPC.
Start by installing the respective exporter package as a dependency for your project:
pip install opentelemetry-exporter-otlp-proto-http
or, if you prefer gRPC:
pip install opentelemetry-exporter-otlp-proto-grpc
Usage
Next, configure the exporter to point at an OTLP endpoint in your code.
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry import metrics
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
# Service name is required for most backends
resource = Resource.create(attributes={
    SERVICE_NAME: "your-service-name"
})
tracerProvider = TracerProvider(resource=resource)
processor = BatchSpanProcessor(OTLPSpanExporter(endpoint="<traces-endpoint>/v1/traces"))
tracerProvider.add_span_processor(processor)
trace.set_tracer_provider(tracerProvider)
reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(endpoint="<metrics-endpoint>/v1/metrics")
)
meterProvider = MeterProvider(resource=resource, metric_readers=[reader])
metrics.set_meter_provider(meterProvider)
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry import metrics
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
# Service name is required for most backends
resource = Resource.create(attributes={
    SERVICE_NAME: "your-service-name"
})
tracerProvider = TracerProvider(resource=resource)
processor = BatchSpanProcessor(OTLPSpanExporter(endpoint="your-endpoint-here"))
tracerProvider.add_span_processor(processor)
trace.set_tracer_provider(tracerProvider)
reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(endpoint="localhost:5555")
)
meterProvider = MeterProvider(resource=resource, metric_readers=[reader])
metrics.set_meter_provider(meterProvider)
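With either variant in place, anything recorded through the global tracer and meter providers is exported over OTLP. The following is a minimal sketch to verify the setup end to end; the instrument and attribute names are just examples:

from opentelemetry import trace, metrics

tracer = trace.get_tracer("otlp.example")
meter = metrics.get_meter("otlp.example")

# Hypothetical counter, only used to have something to export.
roll_counter = meter.create_counter("dice.rolls", description="Number of dice rolls")

with tracer.start_as_current_span("roll") as span:
    span.set_attribute("dice.value", 4)
    roll_counter.add(1, {"dice.value": 4})

Spans are sent when the batch processor flushes and metrics on the reader's export interval, so give the collector a moment (or call force_flush on the providers) before checking its debug output.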
Console
To debug your instrumentation or see the values locally in development, you can use exporters writing telemetry data to the console (stdout).
The ConsoleSpanExporter and ConsoleMetricExporter are included in the opentelemetry-sdk package.
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader, ConsoleMetricExporter
# Service name is required for most backends,
# and although it's not necessary for console export,
# it's good to set the service name anyway.
resource = Resource.create(attributes={
    SERVICE_NAME: "your-service-name"
})
tracerProvider = TracerProvider(resource=resource)
processor = BatchSpanProcessor(ConsoleSpanExporter())
tracerProvider.add_span_processor(processor)
trace.set_tracer_provider(tracerProvider)
reader = PeriodicExportingMetricReader(ConsoleMetricExporter())
meterProvider = MeterProvider(resource=resource, metric_readers=[reader])
metrics.set_meter_provider(meterProvider)
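To see output immediately instead of waiting for the batch timeout, you can flush the provider. A small check, reusing the tracerProvider from the snippet above:

from opentelemetry import trace

tracer = trace.get_tracer("console.example")  # instrumentation scope name is arbitrary
with tracer.start_as_current_span("hello-console"):
    pass

tracerProvider.force_flush()  # pending spans are printed to stdout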
There are temporality presets for each instrument kind. These presets can be set with the environment variable OTEL_EXPORTER_METRICS_TEMPORALITY_PREFERENCE, for example:
export OTEL_EXPORTER_METRICS_TEMPORALITY_PREFERENCE="DELTA"
The default value for OTEL_EXPORTER_METRICS_TEMPORALITY_PREFERENCE is "CUMULATIVE".
The available values and their corresponding settings for this environment variable are:
CUMULATIVE
  Counter: CUMULATIVE
  UpDownCounter: CUMULATIVE
  Histogram: CUMULATIVE
  ObservableCounter: CUMULATIVE
  ObservableUpDownCounter: CUMULATIVE
  ObservableGauge: CUMULATIVE
DELTA
  Counter: DELTA
  UpDownCounter: CUMULATIVE
  Histogram: DELTA
  ObservableCounter: DELTA
  ObservableUpDownCounter: CUMULATIVE
  ObservableGauge: CUMULATIVE
LOWMEMORY
  Counter: DELTA
  UpDownCounter: CUMULATIVE
  Histogram: DELTA
  ObservableCounter: CUMULATIVE
  ObservableUpDownCounter: CUMULATIVE
  ObservableGauge: CUMULATIVE
Setting OTEL_EXPORTER_METRICS_TEMPORALITY_PREFERENCE to any value other than CUMULATIVE, DELTA, or LOWMEMORY logs a warning, and the preference falls back to CUMULATIVE.
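If you construct the OTLP metric exporter in code rather than through the environment, a similar preference can be passed per instrument type. This is a sketch, not the only way to do it; the preferred_temporality argument is accepted by recent versions of the OTLP metric exporters, so check your SDK version if it is unavailable:

from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.metrics import Counter, Histogram, ObservableCounter
from opentelemetry.sdk.metrics.export import AggregationTemporality

# Roughly equivalent to the DELTA preset for these three instrument kinds.
exporter = OTLPMetricExporter(
    endpoint="localhost:4317",
    preferred_temporality={
        Counter: AggregationTemporality.DELTA,
        Histogram: AggregationTemporality.DELTA,
        ObservableCounter: AggregationTemporality.DELTA,
    },
)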
Jaeger
Backend Setup
Jaeger natively supports OTLP to receive trace data. You can run Jaeger in a Docker container with the UI accessible on port 16686 and OTLP enabled on ports 4317 and 4318:
docker run --rm \
-e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
-p 16686:16686 \
-p 4317:4317 \
-p 4318:4318 \
-p 9411:9411 \
jaegertracing/all-in-one:latest
Usage
Now follow the instructions above to set up the OTLP exporters.
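Because Jaeger speaks OTLP, no Jaeger-specific exporter is needed. For example, a sketch pointing the OTLP/HTTP span exporter at the container started above:

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

resource = Resource.create(attributes={SERVICE_NAME: "your-service-name"})
provider = TracerProvider(resource=resource)
# Jaeger's OTLP/HTTP receiver listens on port 4318.
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)
trace.set_tracer_provider(provider)

Exported traces then show up in the Jaeger UI at http://localhost:16686.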
Prometheus
To send your metric data to Prometheus, you can either enable Prometheus' OTLP receiver and use the OTLP exporter, or you can use the Prometheus exporter, a MetricReader that starts an HTTP server that collects metrics and serializes them to Prometheus text format on request.
Backend Setup
If you already have Prometheus or a Prometheus-compatible backend set up, you can skip this section and set up the Prometheus or OTLP exporter dependencies for your application.
You can run Prometheus in a Docker container, accessible on port 9090, by following these steps:
Create a file called prometheus.yml with the following content:
scrape_configs:
  - job_name: dice-service
    scrape_interval: 5s
    static_configs:
      - targets: [host.docker.internal:9464]
Run Prometheus in a Docker container with the UI accessible on port 9090:
docker run --rm -v ${PWD}/prometheus.yml:/prometheus/prometheus.yml -p 9090:9090 prom/prometheus --enable-feature=otlp-write-receiver
When using Prometheus' OTLP receiver, make sure to set the OTLP endpoint in your application to http://localhost:9090/api/v1/otlp.
Not all Docker environments support host.docker.internal. In some cases, you may need to replace host.docker.internal with localhost or the IP address of your machine.
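For the OTLP route, here is a sketch assuming the opentelemetry-exporter-otlp-proto-http package and the Prometheus container above. When an explicit endpoint is passed, the Python HTTP exporter does not append the signal path automatically, so the full metrics path is spelled out:

from opentelemetry import metrics
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.sdk.resources import SERVICE_NAME, Resource

resource = Resource.create(attributes={SERVICE_NAME: "dice-service"})
# Prometheus' OTLP receiver accepts OTLP/HTTP metrics under /api/v1/otlp/v1/metrics.
reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(endpoint="http://localhost:9090/api/v1/otlp/v1/metrics")
)
metrics.set_meter_provider(MeterProvider(resource=resource, metric_readers=[reader]))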
Dependencies
Install the exporter package as a dependency for your application:
pip install opentelemetry-exporter-prometheus
Update your OpenTelemetry configuration to use the exporter and to send data to your Prometheus backend:
from prometheus_client import start_http_server
from opentelemetry import metrics
from opentelemetry.exporter.prometheus import PrometheusMetricReader
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
# Service name is required for most backends
resource = Resource.create(attributes={
    SERVICE_NAME: "your-service-name"
})
# Start Prometheus client
start_http_server(port=9464, addr="localhost")
# Initialize PrometheusMetricReader which pulls metrics from the SDK
# on-demand to respond to scrape requests
reader = PrometheusMetricReader()
provider = MeterProvider(resource=resource, metric_readers=[reader])
metrics.set_meter_provider(provider)
With the above you can access your metrics at http://localhost:9464/metrics. Prometheus or an OpenTelemetry Collector with the Prometheus receiver can scrape the metrics from this endpoint.
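Any instrument you create after this setup is exposed on that endpoint. For instance (instrument and attribute names are illustrative):

from opentelemetry import metrics

meter = metrics.get_meter("prometheus.example")
request_counter = meter.create_counter("http.server.requests")
request_counter.add(1, {"http.route": "/rolldice"})

Note that the Prometheus exporter rewrites names to the Prometheus format, so the counter typically appears with underscores (and a _total suffix) in the scrape output.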
Zipkin
Backend Setup
If you already have Zipkin or a Zipkin-compatible backend set up, you can skip this section and set up the Zipkin exporter dependencies for your application.
You can run Zipkin in a Docker container by executing the following command:
docker run --rm -d -p 9411:9411 --name zipkin openzipkin/zipkin
Dependencies
To send your trace data to Zipkin, you can choose between two different protocols to transport your data: protobuf or JSON (both over HTTP).
Install the respective exporter package as a dependency for your application:
pip install opentelemetry-exporter-zipkin-proto-http
or, for JSON:
pip install opentelemetry-exporter-zipkin-json
Update your OpenTelemetry configuration to use the exporter and to send data to your Zipkin backend:
from opentelemetry import trace
from opentelemetry.exporter.zipkin.proto.http import ZipkinExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
resource = Resource.create(attributes={
    SERVICE_NAME: "your-service-name"
})
zipkin_exporter = ZipkinExporter(endpoint="http://localhost:9411/api/v2/spans")
provider = TracerProvider(resource=resource)
processor = BatchSpanProcessor(zipkin_exporter)
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
from opentelemetry import trace
from opentelemetry.exporter.zipkin.json import ZipkinExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
resource = Resource.create(attributes={
    SERVICE_NAME: "your-service-name"
})
zipkin_exporter = ZipkinExporter(endpoint="http://localhost:9411/api/v2/spans")
provider = TracerProvider(resource=resource)
processor = BatchSpanProcessor(zipkin_exporter)
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
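A quick way to verify either variant (assuming the Zipkin container from above) is to emit a span, flush the provider, and then look for the trace in the Zipkin UI at http://localhost:9411:

from opentelemetry import trace

tracer = trace.get_tracer("zipkin.example")
with tracer.start_as_current_span("test-zipkin-export"):
    pass

provider.force_flush()  # push the batched span to Zipkin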
Custom exporters
Finally, you can also write your own exporter. For more information, see the SpanExporter interface in the API documentation.
Batching spans and log records
The OpenTelemetry SDK provides a set of default span and log record processors that allow you to export spans either one at a time (simple) or in batches. Batching is recommended, but if you do not want to batch your spans or log records, you can use the simple processor instead, as follows:
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
processor = SimpleSpanProcessor(OTLPSpanExporter(endpoint="your-endpoint-here"))
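The simple processor still needs to be registered on a tracer provider before spans flow through it, for example:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider

provider = TracerProvider()
provider.add_span_processor(processor)  # the SimpleSpanProcessor created above
trace.set_tracer_provider(provider)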