Unveiling OpenTelemetry: Your Key to Streamlined Observability

This blog dives into OpenTelemetry (OTel), an open-source framework that simplifies how you collect and analyze data about your applications.

Author

Priyanshu Singh
Software Engineer III

Date

Jun 6, 2024

What is OpenTelemetry?

“OpenTelemetry is an open-source project under the Cloud Native Computing Foundation (CNCF) that aims to standardize the collection and export of telemetry data, including traces, metrics, and logs, from distributed systems and microservices architectures. Originally formed by the merger of the OpenTracing and OpenCensus projects, OpenTelemetry provides a unified framework for instrumenting applications, libraries, and infrastructure components to gather telemetry data and send it to backend systems for analysis and visualization.”

OpenTelemetry is an open-source project that helps developers monitor and understand their software's performance and behavior. It provides a set of tools, APIs, and SDKs (Software Development Kits) that make it easier to collect, process, and export telemetry data such as traces, metrics, and logs from your applications.

Benefits of OpenTelemetry

OpenTelemetry offers several advantages for organizations looking to streamline their observability efforts:

  • Vendor Neutrality: No more being locked into a single vendor! OTel lets you collect data from various sources and send it to different platforms, offering flexibility in your monitoring setup.
  • Data Flexibility: You control what data gets sent. OTel allows you to filter and customize the telemetry you collect, ensuring you capture only the information you need for optimal performance analysis.
  • Extensibility: OpenTelemetry supports a wide range of programming languages and frameworks, making it easy to integrate with your existing applications and infrastructure.

How Does OpenTelemetry Work?

OpenTelemetry works by providing a unified and standardized approach to collect and process telemetry data from your applications. It begins with instrumentation, where developers either manually add code using OpenTelemetry APIs or utilize auto-instrumentation libraries to automatically gather data such as traces, metrics, and logs. This data captures crucial information about the application's performance and behavior. OpenTelemetry ensures context propagation, maintaining the continuity of request data across various components and services in a distributed system.

The collected data is then processed and exported using processors and exporters, which send it to chosen observability platforms like Prometheus or Jaeger.

Once the data reaches these backends, it can be analyzed and visualized through dashboards and alerts, helping developers monitor system health, diagnose issues, and optimize performance. By standardizing telemetry collection and processing, OpenTelemetry simplifies observability and enhances the ability to maintain and improve complex software systems.


In short, OpenTelemetry empowers you with a standardized and efficient way to monitor your applications, leading to a clearer understanding of your system's overall health and performance.

Let us look at an example of using OpenTelemetry with Jaeger and Prometheus in a NestJS application.

While OpenTelemetry provides a standardized way to collect data, specific tools excel in analyzing different aspects of your system's health. Here is a quick introduction to two popular options:

  • Jaeger: This open-source tool focuses on distributed tracing. It maps the journey of a user request across various microservices in your system. This helps pinpoint performance bottlenecks and identify where requests might be slowing down.
  • Prometheus: This tool acts as a metrics monitoring and alerting system. It collects and analyzes time-series data, such as CPU usage, memory consumption, or request latency. Prometheus helps you identify trends and potential issues by providing real-time insights into your system's resource utilization.

Setting Up OpenTelemetry in a NestJS Project with Docker (Step-by-Step Instructions)

Step 1: Create a New NestJS Project

First, create a new NestJS project using the Nest CLI:
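A typical way to do this is shown below; the project name otel-demo is just a placeholder, so substitute your own:

```shell
# Install the Nest CLI globally (skip if you already have it).
npm i -g @nestjs/cli

# Scaffold a new project; "otel-demo" is a placeholder name.
nest new otel-demo
```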

Navigate into your newly created project directory:
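Assuming you used the placeholder name otel-demo:

```shell
cd otel-demo
```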

Step 2: Update app.controller.ts

Modify the app.controller.ts file to add a new endpoint. This is what your file should look like:
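A minimal sketch of the controller, assuming the default Nest scaffold plus a new /test route. The path matches the http://localhost:3000/test endpoint hit later in this post; the response string itself is arbitrary:

```typescript
import { Controller, Get } from '@nestjs/common';
import { AppService } from './app.service';

@Controller()
export class AppController {
  constructor(private readonly appService: AppService) {}

  // Default route from the Nest scaffold.
  @Get()
  getHello(): string {
    return this.appService.getHello();
  }

  // New endpoint we will hit later to generate traces and metrics.
  @Get('test')
  getTest(): string {
    return 'Hello from the OpenTelemetry test endpoint!';
  }
}
```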

Step 3: Create the Configuration Files

Navigate to the src directory and create a config folder:
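From the project root, this can be done in one step:

```shell
mkdir -p src/config
```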

Inside the config folder, create the following files:

Create opentelemetry.ts:

Add the following content to opentelemetry.ts:
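A minimal tracing setup sketch using the OpenTelemetry Node SDK. The package names come from the official @opentelemetry JavaScript distribution, but treat the exact imports and options as assumptions to verify against the versions in your package.json. The service name demo-service-backend is the one used to filter traces in Jaeger later:

```typescript
// Minimal sketch: configure the OpenTelemetry Node SDK to export
// traces to the collector over OTLP/HTTP with auto-instrumentation.
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { Resource } from '@opentelemetry/resources';
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions';

const otelSDK = new NodeSDK({
  resource: new Resource({
    // Must match the service name you search for in Jaeger.
    [SemanticResourceAttributes.SERVICE_NAME]: 'demo-service-backend',
  }),
  traceExporter: new OTLPTraceExporter({
    // The collector's OTLP/HTTP traces endpoint; host and port are
    // assumptions based on the Docker Compose setup in this post.
    url:
      process.env.OTEL_EXPORTER_OTLP_ENDPOINT ??
      'http://otel-collector:4318/v1/traces',
  }),
  // Automatically instruments HTTP, Express/Nest, and other libraries.
  instrumentations: [getNodeAutoInstrumentations()],
});

export default otelSDK;
```

The SDK has to be started before Nest bootstraps — typically by importing this module and calling otelSDK.start() at the very top of main.ts, before NestFactory.create().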

Create otel-collector-config.yaml:

Add the following content to otel-collector-config.yaml:
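A sketch of a collector pipeline that receives OTLP data, batches it, forwards traces to Jaeger, and exposes metrics for Prometheus to scrape. The hostname jaeger and the port numbers assume the Docker Compose service names used later in this post; adjust them to your setup:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:

exporters:
  # Forward traces to Jaeger over OTLP gRPC.
  otlp/jaeger:
    endpoint: jaeger:4317
    tls:
      insecure: true
  # Expose received metrics on an endpoint Prometheus can scrape.
  prometheus:
    endpoint: 0.0.0.0:8889

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
```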

Create prometheus.yaml:

Add the following content to prometheus.yaml:
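A minimal scrape configuration targeting the collector. The hostname otel-collector assumes the Docker Compose service name; port 8888 serves the collector's own internal metrics (such as otelcol_process_cpu_seconds) and 8889 the metrics it re-exports via its Prometheus exporter:

```yaml
global:
  scrape_interval: 5s

scrape_configs:
  - job_name: 'otel-collector'
    static_configs:
      # 8888 = collector internal metrics, 8889 = prometheus exporter.
      - targets: ['otel-collector:8888', 'otel-collector:8889']
```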

Step 4: Create Docker Configuration Files

In the root directory of your project, create the Docker configuration files.

Create Dockerfile:

Add the following content to Dockerfile:
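One possible Dockerfile for the NestJS app; the Node version and build commands are assumptions based on a default Nest project:

```dockerfile
FROM node:18-alpine

WORKDIR /usr/src/app

# Install dependencies first to take advantage of layer caching.
COPY package*.json ./
RUN npm install

# Copy the source and build the Nest app.
COPY . .
RUN npm run build

EXPOSE 3000
CMD ["node", "dist/main"]
```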

Create docker-compose.yml:

Add the following content to docker-compose.yml:
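A sketch of a Compose file wiring the four services together. The published ports for the app (3000), the Jaeger UI (16686), and Prometheus (9090) match the URLs used later in this post; the image tags, service names, and volume paths are assumptions to adapt:

```yaml
version: '3.8'

services:
  app:
    build: .
    ports:
      - '3000:3000'
    env_file:
      - .env
    depends_on:
      - otel-collector

  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ['--config=/etc/otel-collector-config.yaml']
    volumes:
      - ./src/config/otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - '4317:4317' # OTLP gRPC
      - '4318:4318' # OTLP HTTP
      - '8888:8888' # collector internal metrics
      - '8889:8889' # prometheus exporter
    depends_on:
      - jaeger

  jaeger:
    image: jaegertracing/all-in-one:latest
    ports:
      - '16686:16686' # Jaeger UI

  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./src/config/prometheus.yaml:/etc/prometheus/prometheus.yml
    ports:
      - '9090:9090'
```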

Step 5: Create the .env File

In the root directory, create a .env file:

Add the following content to .env:
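OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_SERVICE_NAME are standard OpenTelemetry SDK environment variables; the values below assume the collector runs as the otel-collector service in Docker Compose:

```
OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4318/v1/traces
OTEL_SERVICE_NAME=demo-service-backend
PORT=3000
```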

Final Project Structure

After following the above steps, your project structure should look like this:
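Assuming the placeholder project name otel-demo, the relevant files are laid out roughly like this (standard Nest scaffold files not mentioned above are omitted):

```
otel-demo/
├── src/
│   ├── config/
│   │   ├── opentelemetry.ts
│   │   ├── otel-collector-config.yaml
│   │   └── prometheus.yaml
│   ├── app.controller.ts
│   ├── app.module.ts
│   ├── app.service.ts
│   └── main.ts
├── .env
├── Dockerfile
├── docker-compose.yml
└── package.json
```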

Step 6: Run the Services with Docker Compose

To start all the services defined in your docker-compose.yml file, use the following command:
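From the project root:

```shell
docker-compose up --build
```

On newer Docker installations the equivalent command is docker compose up --build (with a space).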

Ports and Services

Once everything is running, the key services are available on these ports:

  • NestJS app: http://localhost:3000
  • Jaeger UI: http://localhost:16686
  • Prometheus: http://localhost:9090

Now that your services are up and running, it is time to see them in action.

Step 1: Hit the Test Endpoint

Open your browser and navigate to http://localhost:3000/test. Refresh the page a few times to generate some traffic.

Step 2: Explore Jaeger UI

Jaeger is a tool for monitoring and troubleshooting microservices-based distributed systems. It will help you visualize the traces collected by OpenTelemetry.

  • Jaeger UI: Open your browser and go to http://localhost:16686.
  • In the Jaeger UI, you can search for traces of your requests. Use the demo-service-backend as the service name to filter the traces.
  • You will see a detailed view of each trace, showing how your request flowed through the application.

[Screenshot: trace search results in the Jaeger UI]

Step 3: Explore Prometheus

Prometheus is an open-source system monitoring and alerting toolkit. It collects metrics, stores them, and allows you to query them.

  • Prometheus: Open your browser and go to http://localhost:9090.
  • In the Prometheus UI, you can explore the metrics being collected. Use the Graph tab to visualize these metrics over time.
  • You can query metrics such as otelcol_process_cpu_seconds, which shows the total CPU time used by the OpenTelemetry Collector process, or otelcol_exporter_sent_spans, which shows the number of spans the Collector has sent to its exporters.

[Screenshot: querying Collector metrics in the Prometheus UI]

Conclusion

By hitting the test endpoint and exploring the Jaeger and Prometheus UIs, you can see the powerful observability tools in action. Jaeger helps you trace the path of requests through your microservices, while Prometheus provides insights into your system's metrics. This setup ensures you have the visibility needed to monitor and troubleshoot your application effectively.
