
Calix

Staff Software Engineer - Cloud Platform (Kafka)

Reposted Yesterday
Remote
Hiring Remotely in USA
136K-266K Annually
Senior level
Design and implement cloud infrastructure and data pipelines using GCP services, optimize performance, and collaborate with teams for seamless integration and automation.
The Calix platform enables Communication Service Providers (CSPs) of all sizes to transform and future-proof their businesses. Through real-time data, automation, and actionable insights delivered via Calix One — our cloud-first, AI-powered platform — CSPs can simplify operations, collapse cost, and accelerate innovation. Calix One brings together the automation of everything and the experience of one, empowering customers to deliver differentiated subscriber experiences while driving acquisition, loyalty, and revenue growth. This is the Calix mission: to enable CSPs of all sizes to simplify, innovate, and grow, strengthening both their businesses and the communities they serve.
We’re at the forefront of a once-in-a-generation change in the broadband industry. Join us as we innovate, help our customers reach their potential, and connect underserved communities with unrivaled digital experiences.

This is a remote-based position in the US. Please note that the recruitment and hiring process includes an in-person meeting.

We are seeking a skilled and experienced Staff Cloud Platform Engineer with Kafka expertise to join the Cloud Platform team. The Staff Cloud Platform Engineer will design, deploy, operate, and optimize our Apache Kafka-based event streaming infrastructure at scale on Google Cloud Platform (GCP). The ideal candidate will have a strong background in DevOps practices, cloud infrastructure automation, and big data technologies. In this role you will partner closely with platform, data, and application engineering teams to ensure our Kafka clusters are reliable, performant, and secure, running natively on GCP or AWS.

Responsibilities:

  • Design, provision, and manage Apache Kafka clusters (self-managed on GCP/AWS or via Confluent Platform / MSK).

  • Configure and tune brokers, ZooKeeper/KRaft, topics, partitions, replication factors, and retention policies for high throughput and low latency.

  • Perform cluster upgrades, rolling restarts, and broker replacements with zero downtime.

  • Implement and manage Kafka Connect pipelines for data ingestion and egress across heterogeneous systems.

  • Administer Kafka Streams and ksqlDB deployments for real-time stream processing workloads.

  • Maintain Schema Registry and enforce schema governance standards across teams.

  • Define and track SLIs/SLOs for consumer lag, throughput, end-to-end latency, and broker health.

  • Design and implement cloud infrastructure using Infrastructure as Code (IaC) with Terraform.

  • Build automated deployment pipelines for Kafka configuration changes using GitOps workflows (ArgoCD, Flux).

  • Create self-service tooling and runbooks to reduce toil for development teams.

  • Automate topic provisioning, ACL management, and schema registration via APIs and CLI tooling.

  • Integrate tools like GitLab CI/CD or Cloud Build for automated testing and deployment.

  • Ensure seamless integration of data pipelines with other GCP services such as BigQuery and Cloud Storage.

  • Monitor and optimize the performance, reliability, and cost of Kafka and streaming pipelines.

  • Implement security best practices for GCP resources, including IAM policies, encryption, and network security.

  • Ensure observability is an integral part of the infrastructure platforms, providing adequate visibility into their health, utilization, and cost.

  • Collaborate extensively with cross-functional teams to understand their requirements; educate them through documentation and training, and improve adoption of the platforms and tools.
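
As a concrete illustration of the consumer-lag SLI work described above, the sketch below (with made-up partition offsets; a real deployment would pull these from Kafka's AdminClient or a metrics exporter such as Burrow or kafka-lag-exporter) computes per-partition and total lag for a consumer group:

```python
# Hypothetical sketch: compute consumer lag, one of the SLIs listed above,
# from log-end offsets and the group's committed offsets. In production
# these values would come from Kafka's admin/consumer APIs, not literals.

def consumer_lag(end_offsets, committed_offsets):
    """Return (per-partition lag, total lag) for a consumer group.

    end_offsets: {partition: log-end offset}
    committed_offsets: {partition: last committed offset}
    """
    lag = {
        p: max(end_offsets[p] - committed_offsets.get(p, 0), 0)
        for p in end_offsets
    }
    return lag, sum(lag.values())

# Example with made-up offsets for a 3-partition topic:
per_partition, total = consumer_lag(
    end_offsets={0: 1200, 1: 980, 2: 1500},
    committed_offsets={0: 1150, 1: 980, 2: 1400},
)
print(per_partition)  # {0: 50, 1: 0, 2: 100}
print(total)          # 150
```

A check like this typically feeds an SLO alert, e.g. "total lag for group X stays below N messages for 99% of 5-minute windows."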

Qualifications:

  • 10+ years of overall experience in DevOps, cloud engineering, or data engineering.

  • 5+ years of experience operating Kafka at production scale.

  • Deep expertise in Kafka internals: replication protocol, log compaction, consumer group coordination, partition leadership, and KRaft mode.

  • Proficiency with container orchestration (Kubernetes/Helm) and deploying Kafka via Strimzi, Confluent Operator, or equivalent.

  • Strong understanding of networking (VPC, peering, private endpoints, DNS, load balancing) in cloud environments.

  • Hands-on experience with Kafka Connect, Schema Registry, and at least one stream processing framework (Kafka Streams, Flink, Spark Structured Streaming).

  • Proficiency in Google Cloud Platform (GCP) services, including Dataflow, Pub/Sub, Kafka, Dataproc, BigQuery, and Cloud Storage.

  • Expertise in Infrastructure as Code (IaC) tools like Terraform or Cloud Deployment Manager.

  • Familiarity with data orchestration tools like Apache Airflow or Cloud Composer.

  • Experience with CI/CD tools like Jenkins, GitLab CI/CD, or Cloud Build.

  • Knowledge of containerization and orchestration tools like Docker and Kubernetes.

  • Strong scripting skills for automation (e.g., Bash, Python).

  • Experience with monitoring tools like Cloud Monitoring, Prometheus, and Grafana.

  • Familiarity with logging tools like Cloud Logging or ELK Stack.

  • Strong problem-solving and analytical skills.

  • Excellent communication and collaboration abilities.

  • Ability to work in a fast-paced, agile environment.
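
To make the broker/topic tuning and automation responsibilities above concrete, here is a minimal sketch (the function name, thresholds, and example topics are hypothetical; actual guardrails would reflect the team's durability requirements) of validating topic settings before automated provisioning:

```python
# Hypothetical sketch: sanity-check topic settings against common durability
# guidelines before automated topic provisioning. Thresholds are illustrative.

def validate_topic(name, partitions, replication_factor, min_insync_replicas):
    """Return a list of human-readable problems; an empty list means the
    proposed topic configuration passes the guardrails."""
    problems = []
    if partitions < 1:
        problems.append(f"{name}: partitions must be >= 1")
    if replication_factor < 2:
        problems.append(
            f"{name}: replication.factor < 2 risks data loss on broker failure"
        )
    if min_insync_replicas >= replication_factor:
        problems.append(
            f"{name}: min.insync.replicas must be < replication.factor, "
            "otherwise the topic becomes unwritable when one replica is down"
        )
    return problems

# A typical durable config (RF=3, min.insync.replicas=2) passes:
print(validate_topic("orders", partitions=12,
                     replication_factor=3, min_insync_replicas=2))  # []
# min.insync.replicas == replication.factor is flagged:
print(validate_topic("audit", partitions=6,
                     replication_factor=3, min_insync_replicas=3))
```

In practice a check like this would run in a GitOps pipeline, rejecting topic change requests before they ever reach the cluster.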

#LI-Remote

The base pay range for this position varies based on the geographic location. More information about the pay range specific to candidate location and other factors will be shared during the recruitment process. Individual pay is determined based on location of residence and multiple factors, including job-related knowledge, skills and experience.

San Francisco Bay Area:

156,400 - 265,700 USD Annual

All Other US Locations:

136,000 - 231,000 USD Annual

As a part of the total compensation package, this role may be eligible for a bonus.

Top Skills

Apache Airflow
Apache Kafka
Bash
BigQuery
Cloud Build
Cloud Logging
Cloud Monitoring
Cloud Storage
Dataflow
Dataproc
Datastream
Docker
ELK Stack
GitLab CI/CD
Google Cloud Platform (GCP)
Grafana
Kubernetes
Prometheus
Pub/Sub
Python
Terraform


