
Raft

Principal MLOps Engineer

Posted 2 Days Ago
In-Office or Remote
Hiring Remotely in US
150K-200K Annually
Senior level

This is a U.S. based position. All of the programs we support require U.S. citizenship to be eligible for employment. All work must be conducted within the continental U.S.

Who we are:

Raft (https://TeamRaft.com) is a customer-obsessed, non-traditional defense tech company dedicated to empowering U.S. military and government agencies with cutting-edge AI/ML and data solutions. We are a leader in autonomous data fusion and Agentic AI, with a purposeful focus on Distributed Data Systems, Platforms at Scale, and Complex Application Development. Headquartered in McLean, VA, we serve innovative federal and public agencies, leveraging design thinking, a cutting-edge tech stack, and a cloud-native ecosystem. We build digital solutions that impact the lives of millions of Americans.

We’re looking for an experienced Principal MLOps Engineer to support our customers and join our passionate team of high-impact problem solvers.

About the role:

Raft is building mission-critical AI and data platforms for the Department of Defense (DoD). Our systems ingest and process massive volumes of real-time data from hundreds of sensors and operational sources, transform that data into usable intelligence, and deliver it to operators through mission applications and common operational pictures that support time-sensitive decision-making.

Our platform operates at scale, processing billions of events per day with low-latency data pipelines and cloud-native infrastructure. As Raft expands its AI capabilities, we are investing in a more mature end-to-end machine learning platform to support model development, evaluation, deployment, monitoring, and lifecycle management across both cloud and constrained operational environments.

In this role, you will help design, deploy, and mature Raft’s ML platform and MLOps infrastructure. You will work across Kubernetes-based deployment environments, GPU-enabled infrastructure, model serving systems, CI/CD pipelines, and secure production operations to enable rapid and reliable delivery of machine learning capabilities. This role is ideal for someone who understands both the infrastructure needed to run ML systems in production and the practical needs of ML engineers building and deploying models.

What you’ll do:
  • Design, build, and maintain secure, scalable MLOps infrastructure and deployment pipelines for production ML systems
  • Help mature Raft’s internal ML platform and model lifecycle capabilities, including model packaging, registry/catalog workflows, deployment, monitoring, and operational support
  • Deploy and manage machine learning workloads on Kubernetes, including GPU-enabled clusters
  • Support model serving and inference infrastructure for a range of ML use cases, including traditional ML, computer vision, speech/audio, and LLM-based systems
  • Build and maintain CI/CD workflows for ML services, model artifacts, and platform components
  • Partner closely with ML engineers, software engineers, and product teams to move models from experimentation to reliable operational deployment
  • Improve observability, reliability, security, and maintainability across ML infrastructure and services
  • Help evaluate and standardize runtime patterns, serving frameworks, and deployment architectures for production ML workloads
  • Contribute to infrastructure decisions across edge, on-prem, and cloud-hosted deployment environments
  • Support compliance-driven deployment practices and secure software supply chain requirements in defense environments
  • Get hands-on with customers at the most forward-leaning places in the Department of War

What we are looking for:

  • 7+ years of relevant hands-on experience in software engineering, platform engineering, DevOps, MLOps, or related technical roles
  • 5+ years of experience with Docker and Kubernetes in production environments
  • 5+ years of experience supporting enterprise cloud infrastructure or applications in AWS, Azure, or similar environments
  • Strong experience provisioning, operating, and troubleshooting Kubernetes clusters in production
  • Experience building and maintaining machine learning platforms, infrastructure, or pipelines used by engineering or data science teams
  • Practical experience deploying machine learning workloads on Kubernetes
  • Experience managing clusters or workloads that use GPUs
  • Strong understanding of Helm and Kubernetes deployment patterns
  • Strong scripting or programming skills, preferably in Python
  • Experience with modern software engineering practices including Git, CI/CD, DevOps, and Agile/Scrum workflows
  • Strong troubleshooting, systems thinking, and communication skills
  • Ability to work independently and collaboratively in a fast-moving environment
  • Ability to obtain and maintain a Top Secret clearance
  • Ability to obtain Security+ certification within the first 90 days of employment

Highly preferred:

  • Experience with ML model serving and inference platforms such as Triton Inference Server, KServe, Ray Serve, vLLM, or similar technologies
  • Experience with secure and compliant deployment practices in regulated or government environments
  • Experience with Kubernetes-based ML platforms such as Kubeflow
  • Familiarity with service mesh technologies such as Istio
  • Experience provisioning and debugging complex CI/CD systems
  • Experience with infrastructure as code tools such as Terraform
  • Familiarity with software supply chain security, container hardening, vulnerability management, and runtime scanning
  • Experience supporting ML systems across multiple deployment environments, including cloud, on-prem, and edge
  • Background working with machine learning engineers on model training, evaluation, packaging, and release workflows
  • Familiarity with storage and artifact systems used in ML platforms, such as S3-compatible object stores, registries, and metadata/catalog systems

What success looks like:

  • You help Raft stand up a more mature and repeatable ML platform for deploying and managing models in production
  • ML engineers can move faster because deployment, serving, and platform workflows are clearer, more reliable, and easier to use
  • Model deployments become more secure, observable, and supportable across real-world mission environments
  • The organization gains stronger infrastructure for model lifecycle management, including deployment standards, runtime patterns, and platform ownership

Clearance Requirements:

  • Ability to obtain and maintain a Top Secret clearance 

Work Type: 

  • Remote, limited to the following locations ONLY: DMV area; McLean, VA; Boston, MA; San Antonio, TX; Colorado Springs, CO; Tampa, FL; Honolulu, HI
  • May require up to 40% travel

Salary Range: $150,000.00 - $200,000.00

What we will offer you:

  • Highly competitive salary
  • Fully covered healthcare, dental, and vision plans
  • 401(k) and company match
  • Take-as-you-need PTO plus 11 paid holidays
  • Education & training benefits
  • Annual budget for your tech/gadgets needs
  • Monthly box of yummy snacks to eat while doing meaningful work
  • Remote, hybrid, and flexible work options
  • Team off-site in fun places!
  • Generous Referral Bonuses
  • And More!

Our Vision Statement: 

We bridge the gap between humans and data through radical transparency and our obsession with the mission. 

Our Customer Obsession: 

We will approach every deliverable like it's a product. We will adopt a customer-obsessed mentality. As we grow and our footprint becomes larger, teams and employees will treat each other not only as teammates but as customers. We must live the customer-obsessed mindset, always. This will help us scale, and it will translate to the interactions that our Rafters have with their clients and the other product teams they integrate with. Our culture will enable our success and set us apart from other companies.

How do we get there? 

Public-sector modernization is critical for us to live in a better world. We, at Raft, want to innovate and solve complex problems. And, if we are successful, our generation and the ones that follow us will live in a delightful, efficient, and accessible world where out-of-the-box thinking and collaboration are the norm.

Raft’s core philosophy is Ubuntu: I Am, Because We are. We support our “nadi” by elevating the other Rafters. We work as a hyper collaborative team where each team member brings a unique perspective, adding value that did not exist before. People make Raft special. We celebrate each other and our cognitive and cultural diversity. We are devoted to our practice of innovation and collaboration. 

We’re an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.
