
NVIDIA

Senior Deep Learning Framework Communications Engineer

Reposted 11 Days Ago
In-Office or Remote
Hiring Remotely in Austin, TX, USA
152K-288K Annually
Senior level

NVIDIA is leading the way in groundbreaking developments in Artificial Intelligence, High Performance Computing and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables amazing creativity and discovery, and powers what were once science fiction inventions from artificial intelligence to autonomous cars.

We are looking for a motivated Deep Learning engineer to bring advanced communication technologies into AI stacks such as PyTorch, TRT-LLM, vLLM, SGLang, and JAX. You will work with the team that created communication libraries like NCCL and NVSHMEM and technologies like GPUDirect for scaling Deep Learning and HPC applications. Your customers will have diverse multi-GPU demands, ranging from training at scales of up to 100K GPUs to inference at microsecond latencies. Communication performance between GPUs has a direct impact on AI applications, and your work in AI toolkits will make all of this easier for the community. This is an outstanding opportunity for someone with an AI background to advance the state of the art in this space. Are you ready to contribute to the development of innovative technologies and help realize NVIDIA's vision?

What you will be doing:

  • Integrate new communication library features into AI frameworks: from PoC to performance analysis to production

  • Perform deep analysis of AI workloads and frameworks to identify multi-GPU communication requirements and opportunities. Collaborate hands-on with teams working on the latest AI models.

  • Improve AI compilers to hide communication or perform automatic fusion.

  • Conduct in-depth AI workload performance characterization on multi-GPU clusters.

  • Design fault-tolerant and elastic solutions for large-scale or dynamic AI workloads.

  • Author custom communication or fused compute-communication kernels to showcase ultimate performance on NVIDIA platforms.

  • Influence the roadmap of communication libraries such as NCCL and NVSHMEM.

  • Collaborate with a very dynamic team across multiple time zones.
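The collective operations behind the libraries named above can be illustrated with the classic ring all-reduce pattern. Below is a toy single-process sketch in plain Python, with "ranks" simulated as list indices; it is purely illustrative, as real implementations like NCCL's move GPU buffers over NVLink and InfiniBand in parallel:

```python
# Toy single-process simulation of a ring all-reduce, the
# bandwidth-optimal pattern used for large reductions. Each simulated
# "rank" holds a vector; after a reduce-scatter pass followed by an
# all-gather pass around the ring, every rank holds the element-wise sum.

def ring_allreduce(buffers):
    n = len(buffers)                      # number of simulated ranks
    data = [list(b) for b in buffers]     # don't mutate caller's buffers
    size = len(data[0])
    assert size % n == 0, "vector length must divide into n chunks"
    c = size // n                         # elements per chunk

    # Reduce-scatter: in step s, rank r forwards partial sums for chunk
    # (r - s - 1) % n to its ring neighbor. After n - 1 steps, rank r
    # holds the fully reduced chunk r.
    for s in range(n - 1):
        for r in range(n):
            k = (r - s - 1) % n
            dst = (r + 1) % n
            for i in range(k * c, (k + 1) * c):
                data[dst][i] += data[r][i]

    # All-gather: completed chunks circulate around the ring and
    # overwrite stale copies, so every rank ends with the full result.
    for s in range(n - 1):
        for r in range(n):
            k = (r - s) % n
            dst = (r + 1) % n
            for i in range(k * c, (k + 1) * c):
                data[dst][i] = data[r][i]
    return data

# Four ranks, each contributing a length-4 vector filled with (rank + 1).
result = ring_allreduce([[r + 1] * 4 for r in range(4)])
```

Each element crosses every link roughly twice regardless of ring size, which is why this pattern scales to very large GPU counts.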

What we need to see:

  • B.S., M.S., or Ph.D. in Computer Science or a related field (or equivalent experience), with 5+ years of software engineering and HPC/AI experience

  • Development or integration experience with Deep Learning frameworks such as PyTorch and JAX, and inference engines such as TRT-LLM, vLLM, and SGLang

  • Rapid prototyping and development with Python, C++, CUDA, or related DSLs (Triton, cuTe)

  • Solid grasp of AI models, parallelisms, and/or compiler technologies (e.g. torch.compile)

  • Experience conducting performance benchmarking on AI clusters. Familiarity with at least one performance profiler toolchain (PyTorch profiler, NVIDIA Nsight Systems)

  • Understanding of HPC/AI communication concepts (one-sided vs. two-sided communication, elasticity, resiliency, topology discovery, etc.)

  • Adaptability and passion to learn new areas and tools

  • Flexibility to work and communicate effectively across different teams and timezones
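The one-sided vs. two-sided distinction in the requirements above can be sketched in plain Python. This is a toy in-process model (real runtimes like MPI and NVSHMEM operate across processes and GPUs), with both classes invented here for illustration:

```python
import queue

class TwoSidedChannel:
    """Two-sided model: every transfer needs a matching send and recv,
    so the receiver must actively participate in each message."""
    def __init__(self):
        self._q = queue.Queue()
    def send(self, data):
        self._q.put(data)
    def recv(self):
        return self._q.get()      # transfer completes only here

class OneSidedWindow:
    """One-sided model: memory is exposed once, then peers put/get into
    it without the target issuing any matching call (RMA/NVSHMEM style)."""
    def __init__(self, size):
        self.mem = [0] * size     # the exposed "symmetric" buffer
    def put(self, offset, values):
        self.mem[offset:offset + len(values)] = values
    def get(self, offset, count):
        return self.mem[offset:offset + count]

# Two-sided: rank 0 sends, rank 1 must call recv to complete the transfer.
ch = TwoSidedChannel()
ch.send([1, 2, 3])
received = ch.recv()

# One-sided: rank 0 writes directly into rank 1's exposed window.
win = OneSidedWindow(8)
win.put(offset=2, values=[7, 8])
fetched = win.get(offset=2, count=2)
```

One-sided semantics decouple data movement from the target's control flow, which is what makes fine-grained, kernel-initiated communication practical at microsecond latencies.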

Ways to stand out from the crowd:

  • Experience with parallel programming on at least one communication runtime (NCCL, NVSHMEM, MPI). Good understanding of computer system architecture, HW-SW interactions, and operating systems principles (i.e., systems software fundamentals)

  • Expertise in one or more of these areas: training, distributed inference, MoE, Reinforcement Learning, kernel authoring (in CUDA, Triton, cuTe, etc.). Experience with programming for compute and communication overlap in distributed runtimes

  • Experience with AI compiler pattern matching and lowering. Solid understanding of memory hierarchy, consistency model, and tensor layout
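The pattern matching and lowering mentioned above can be sketched as a tiny rewrite pass over a linear IR. This is a minimal illustration of the kind of fusion rewrite compiler backends such as torch.compile's perform before codegen; the IR format and op names here are invented for the example:

```python
# Fuse a mul followed by a dependent add into a single fused op (fma).
# The sketch assumes the mul's result has no consumers other than the
# add; a real pass would check use counts before rewriting.

def fuse_mul_add(ops):
    """ops: list of (output, opname, args) tuples; returns a fused list."""
    fused, i = [], 0
    while i < len(ops):
        if (i + 1 < len(ops)
                and ops[i][1] == "mul"
                and ops[i + 1][1] == "add"
                and ops[i][0] in ops[i + 1][2]):   # add consumes mul's result
            out_m, _, (a, b) = ops[i]
            out_a, _, args = ops[i + 1]
            c = args[1] if args[0] == out_m else args[0]
            fused.append((out_a, "fma", (a, b, c)))  # out_a = a*b + c
            i += 2                                   # consumed both ops
        else:
            fused.append(ops[i])
            i += 1
    return fused

ir = [("t0", "mul", ("x", "y")),
      ("t1", "add", ("t0", "z")),
      ("t2", "relu", ("t1",))]
lowered = fuse_mul_add(ir)
```

The same match-and-rewrite idea extends to fusing communication with compute, e.g. replacing a matmul followed by an all-reduce with a single overlapped kernel.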

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 152,000 USD - 241,500 USD for Level 3, and 184,000 USD - 287,500 USD for Level 4.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until May 13, 2026.

This posting is for an existing vacancy. 

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.


