About the Role
Are you ready to build the data backbone that enables trusted, real-time insight across complex mission environments? As an Integration Engineer, you will design and implement data integration solutions that securely and reliably move data across diverse domains while ensuring data consistency, quality, and governance. You'll build streaming and batch pipelines that support operational intelligence, cross-domain analytics, and enterprise data products, often in environments with strict security, segmentation, and governance requirements. Working closely with domain experts and platform teams, you'll help ensure data is available where decisions are made, even across distributed and constrained architectures.
Responsibilities
- Design and implement integration patterns that enable seamless data flow across multiple business domains.
- Navigate domain-specific security models, network segmentation, and data sovereignty requirements.
- Implement cross-domain service architectures using APIs, event streaming, and data virtualization to decouple source and consumer domains.
- Collaborate with domain data stewards to define service-level agreements (SLAs), data contracts, and handshake protocols between domains.
- Build scalable data acquisition pipelines from diverse sources.
- Implement change data capture (CDC) using Debezium, AWS DMS, or similar tools for database sources.
- Develop resilient ingestion frameworks that handle variable data volumes, network latency, and source system unreliability.
- Architect, deploy, and manage Apache Kafka clusters across multiple domains or environments (on-premise, cloud, hybrid).
- Implement Kafka Streams or ksqlDB for real-time data enrichment and transformation.
- Design canonical data models that serve as the lingua franca for cross-domain data exchange.
- Collaborate with domain experts to align business definitions, hierarchies, and metrics across functions.
- Implement cross-domain security controls including encryption in transit, encryption at rest, and fine-grained access controls (RBAC, ABAC).
- Ensure compliance with regulatory requirements (GDPR, CCPA, SOX, etc.) across domain boundaries.
- Set up alerting for pipeline failures, data latency, schema drift, and cross-domain connectivity issues.
Required Qualifications
- 5+ years of experience in data engineering, data integration, or software engineering with a focus on enterprise-scale environments.
- Proven experience designing and operating cross-domain data integration architectures in large enterprises.
- Experience navigating network segmentation, firewall policies, and security zones in hybrid or multi-cloud environments.
- Production experience with Apache Kafka, including Kafka cluster administration (brokers, topics, partitions, replication, consumer groups).
- Experience with managed Kafka services: Confluent Cloud, Amazon MSK, Azure Event Hubs, or similar.
- Experience with cross-cluster replication, disaster recovery, and multi-region Kafka architectures.
- Proven experience acquiring data from:
  - Enterprise applications (SAP, Oracle EBS, JD Edwards, Salesforce)
  - APIs (REST, GraphQL, SOAP) with advanced handling of rate limits, pagination, and authentication
  - Databases via CDC (Debezium, Oracle GoldenGate, AWS DMS)
- Experience with edge data acquisition and IoT platforms (AWS IoT Core, Azure IoT Hub).
- Deep experience with enterprise data modeling across multiple domains.
- Proficiency with data modeling tools (ERwin, ER/Studio, SAP PowerDesigner, or open-source alternatives).
- Advanced proficiency in custom integration development, Kafka producers/consumers, and automation.
- Experience with Kafka client libraries and stream processing applications.
- Expert-level skills in data validation, reconciliation, and complex transformations.
- Deep experience with AWS (MSK, ECS, Lambda, S3, IAM, VPC) or Azure (Event Hubs, Data Factory, Synapse, Databricks).
- Experience with Docker, Kubernetes, and Helm for deploying streaming applications.
- Experience with cloud data warehouses (Snowflake, BigQuery, Redshift).
- Git and CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins).
- Experience with Terraform, AWS CloudFormation, or Azure Resource Manager.

