NVIDIA sits at the center of the AI revolution, and the teams behind our data and observability platforms keep the whole engine running! We’re hiring Site Reliability Engineers who want to work on the systems that power everything from large-scale data pipelines to model training clusters to real-time decision making. This isn’t a typical SRE role: you’ll help design and run NVIDIA’s global telemetry backbone, the platform that carries metrics, logs, traces, and profiling data for some of the most demanding workloads in the world. You’ll shape how our AI and data systems are built, set reliability standards, and solve the scaling challenges that come with operating at NVIDIA’s pace and scale.
If you enjoy diving into distributed systems, building automation that eliminates toil, and partnering with infra and application teams to raise the reliability bar, this is a place where your work will have real impact. And you’ll be joining a group that values curiosity, learning, and blameless engineering, giving you room to grow while working on problems that matter.
What you’ll be doing:
Architecting and operating large-scale observability systems that span global regions and support AI, data, and platform services.
Designing resilient pipelines for metrics, logs, traces, profiling, and events that keep critical systems visible and debuggable.
Working closely with platform, infrastructure, and application teams to establish telemetry standards, instrumentation patterns, and integration workflows.
Automating deployments, scaling workflows, and maintenance tasks to cut down toil and level up operational maturity across the stack.
Defining and maintaining SLOs, SLIs, error budgets, dashboards, and alerting models that guide reliability decisions company-wide (see the error-budget sketch after this list).
Building self-service tooling and frameworks that make observability easy to adopt for engineers across NVIDIA.
Studying real system behavior to uncover bottlenecks, scaling limits, failure modes, and long-term architecture risks.
Running day-to-day operations including upgrades, performance tuning, break/fix, and rotations that keep the platform healthy.
Leading incident response and root-cause investigations, then driving the follow-through to eliminate repeat failures.
Guiding engineers through design reviews, operational best practices, and reliability-focused decision making.
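The SLO and error-budget work called out above reduces to simple arithmetic once an SLI is defined. Below is a minimal sketch in Python, assuming a request-availability SLI over a rolling window; the target, window length, and request counts are hypothetical, and a production system would derive them from the telemetry platform rather than hard-coding them.

```python
from dataclasses import dataclass

@dataclass
class SLOWindow:
    """Availability SLI over a rolling window (all numbers are illustrative)."""
    target: float        # e.g. 0.999 -> 99.9% of requests should succeed
    window_hours: float  # length of the SLO window
    good: int            # successful requests observed so far in the window
    total: int           # all requests observed so far in the window

    @property
    def sli(self) -> float:
        return self.good / self.total if self.total else 1.0

    @property
    def error_budget(self) -> float:
        """Fraction of requests allowed to fail over the full window."""
        return 1.0 - self.target

    @property
    def budget_consumed(self) -> float:
        """Share of the error budget already spent (1.0 = fully spent)."""
        return (1.0 - self.sli) / self.error_budget

    def burn_rate(self, elapsed_hours: float) -> float:
        """How many windows' worth of budget we would burn at the current pace."""
        return self.budget_consumed * (self.window_hours / elapsed_hours)


# Example: 30-day, 99.9% availability SLO, 6 hours into the window.
window = SLOWindow(target=0.999, window_hours=30 * 24, good=999_900, total=1_000_000)
print(f"SLI: {window.sli:.4%}, budget consumed: {window.budget_consumed:.1%}, "
      f"burn rate: {window.burn_rate(elapsed_hours=6):.1f}x")
```

Multi-window burn-rate alerts, which page only when both a short and a long window are burning budget faster than a threshold, are one common way to turn this arithmetic into the alerting models described above.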
What we need to see:
Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent experience.
10+ years operating large-scale production systems in roles such as SRE, Production Engineer, or Platform Engineer, and 5+ years designing, building, and running observability platforms at scale.
Deep hands-on experience with open-source observability stacks, including Prometheus/Thanos/Mimir for metrics, Loki or Elasticsearch/OpenSearch for logs, and Tempo/Jaeger/OpenTelemetry for tracing and profiling (see the instrumentation sketch after this list).
Strong programming ability in Python and Go, with Java experience considered a plus.
Solid grounding in Linux internals, networking, storage systems, distributed systems, concurrency, and performance engineering.
Experience architecting multi-region, multi-tenant telemetry pipelines with high availability and strong durability guarantees.
Proven skill in optimizing PromQL, LogQL, trace queries, ingestion paths, indexing strategies, and retention policies.
Strong understanding of SLOs, SLIs, error budgets, incident response, and the operational processes that support reliable systems.
Ability to analyze complex distributed systems, pinpoint failure modes, and drive data-informed debugging and root cause analysis.
Clear communicator who can collaborate effectively across product, platform, infrastructure, and application engineering teams.
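As a concrete illustration of the tracing side of the stacks listed above, here is a minimal sketch using the OpenTelemetry Python SDK (the opentelemetry-api and opentelemetry-sdk packages). The service name, span names, and attributes are hypothetical, and a real deployment would export through a collector to a backend such as Tempo or Jaeger rather than to the console.

```python
# Requires: pip install opentelemetry-api opentelemetry-sdk
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Tag all telemetry from this process with a service name (hypothetical).
provider = TracerProvider(resource=Resource.create({"service.name": "ingest-gateway"}))
# Batch spans and export them; swap ConsoleSpanExporter for an OTLP exporter in practice.
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("example.instrumentation")

def handle_request(tenant: str, payload_bytes: int) -> None:
    # One span per unit of work; attributes become searchable fields in the backend.
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("tenant.id", tenant)
        span.set_attribute("payload.bytes", payload_bytes)
        with tracer.start_as_current_span("write_to_storage"):
            pass  # storage call would go here

handle_request("tenant-a", 2048)
```

The same SDK pattern extends to metrics and logs, which is what makes shared instrumentation standards across teams practical.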
Ways to stand out from the crowd:
Designed or led the architecture of a global observability platform supporting thousands of services with strict reliability and performance requirements.
Contributed meaningfully to OpenTelemetry, Prometheus, Grafana, or other major observability open-source projects.
Built high-throughput ingestion pipelines and long-term storage systems, with a strong focus on cost efficiency, retention strategy, and query performance.
Specialized in high-cardinality telemetry, multi-tenant isolation, and advanced retention or tiered storage models (a cardinality-reduction sketch follows this list).
Worked hands-on with Kafka, Spark, Flink, or large-scale collectors in ultra-high-scale production environments where observability is mission critical.
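On the high-cardinality point, the usual approach is to bound series growth before telemetry reaches storage by keeping a small allow-list of labels and bucketing or dropping the rest. The Python sketch below is purely illustrative; the label names, bucket counts, and rules are hypothetical.

```python
import hashlib

# Labels that are safe to keep verbatim (bounded cardinality).
ALLOWED_LABELS = {"service", "region", "status_code", "tenant"}
# Labels that are useful but unbounded; keep a stable low-cardinality hash bucket instead.
BUCKETED_LABELS = {"user_id": 64, "request_path": 128}

def reduce_cardinality(labels: dict[str, str]) -> dict[str, str]:
    """Rewrite a metric's label set so the number of possible series stays bounded."""
    reduced = {}
    for name, value in labels.items():
        if name in ALLOWED_LABELS:
            reduced[name] = value
        elif name in BUCKETED_LABELS:
            buckets = BUCKETED_LABELS[name]
            digest = hashlib.sha1(value.encode()).hexdigest()
            reduced[f"{name}_bucket"] = str(int(digest, 16) % buckets)
        # Anything else (request IDs, raw URLs, ...) is dropped entirely.
    return reduced

print(reduce_cardinality({
    "service": "ingest-gateway",
    "region": "us-west",
    "user_id": "u-91f3c2",
    "trace_id": "4bf92f3577b34da6",  # dropped: unbounded
}))
```

Hashing into a fixed number of buckets keeps a useful grouping dimension while capping how many series any single label can create.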
NVIDIA leads the way in groundbreaking developments in Artificial Intelligence, High-Performance Computing, and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables amazing creativity and discovery, and powers what were once science fiction inventions, from artificial intelligence to autonomous cars. NVIDIA is looking for exceptional people like you to help us accelerate the next wave of artificial intelligence.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 184,000 USD - 287,500 USD for Level 4, and 224,000 USD - 356,500 USD for Level 5. You will also be eligible for equity and benefits.