Who is Element?
We serve as a partner at the intersection of innovation and our clients' needs, efficiently crafting meaningful user experiences for government and commercial customers. By breaking down complex problems to their fundamental elements, we create modern digital solutions that drive efficiencies, maximize the value of taxpayer dollars, and deliver essential outcomes that serve the people.
Why Work at Element?
Make an impact that resonates: join our vibrant team and discover how you can improve lives through digital transformation. Our talented professionals bring unparalleled energy and engagement, setting a higher standard for impactful work. Come be a part of our team and shape a better future.
Position Summary
We are seeking a driven Data Engineer to support a federal government contract focused on designing, building, and maintaining scalable data pipelines for enterprise-level data integration. The Data Engineer will be responsible for ensuring reliable, high-quality data flow across multiple systems while enabling analytics, reporting, and downstream application use cases.
This role requires strong experience in distributed data systems, real-time and batch processing, and working in secure, regulated government environments.
Key Responsibilities
- Build and maintain high-volume, scalable data pipelines using Apache Kafka and Apache Spark, supporting both real-time and batch data processing needs.
- Design, develop, and optimize data ingestion, transformation, and integration workflows across enterprise systems.
- Ensure data quality, consistency, and integrity across four (4) disparate data sources, implementing validation, cleansing, and reconciliation processes.
- Develop and maintain SQL-based data solutions, including complex queries, stored procedures, performance tuning, and data modeling.
- Collaborate with data analysts, product owners, and application teams to define data requirements and ensure alignment with business needs.
- Implement monitoring, logging, and alerting mechanisms to ensure reliability and observability of data pipelines.
- Support data architecture design and contribute to best practices for scalable and secure data engineering solutions.
- Ensure compliance with federal data governance, security, and privacy requirements.
- Participate in Agile ceremonies and support iterative development and delivery of data capabilities.
- Troubleshoot and resolve data pipeline issues, ensuring minimal disruption to downstream systems and reporting.
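To give candidates a flavor of the validation and reconciliation work described above, here is a minimal sketch in plain Python. It is illustrative only: the field names, sources, and function are hypothetical and not part of this role's actual codebase, and real pipelines here would typically run at scale on Kafka/Spark rather than in-memory.

```python
# Illustrative cross-source reconciliation check (hypothetical fields/sources,
# not from the actual contract environment).

def reconcile(source_a, source_b, key="id"):
    """Compare two extracts by key; report records missing from either side."""
    keys_a = {row[key] for row in source_a}
    keys_b = {row[key] for row in source_b}
    return {
        "missing_in_b": sorted(keys_a - keys_b),  # present in A, absent in B
        "missing_in_a": sorted(keys_b - keys_a),  # present in B, absent in A
        "matched": len(keys_a & keys_b),          # keys present in both
    }

if __name__ == "__main__":
    a = [{"id": 1}, {"id": 2}, {"id": 3}]
    b = [{"id": 2}, {"id": 3}, {"id": 4}]
    print(reconcile(a, b))
```

In production, the same key-level comparison would usually be expressed as a Spark join or a SQL anti-join across the integrated sources, with the discrepancy report feeding the monitoring and alerting mechanisms noted above.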
Minimum Requirements
- Bachelor’s degree in Computer Science, Information Systems, Engineering, Data Science, or related field (or equivalent experience).
- 3+ years of experience in data engineering, data integration, or related technical roles.
- Strong hands-on experience with Apache Kafka for streaming data pipelines.
- Strong experience with Apache Spark for large-scale data processing (batch and/or streaming).
- Advanced SQL development experience, including complex queries, performance tuning, and data transformation logic.
- Experience integrating and managing data across multiple heterogeneous data sources.
- Experience working in the federal government or other highly regulated environments with security and compliance requirements.
- Strong understanding of data quality management, data validation, and data governance practices.
- Strong problem-solving and analytical thinking abilities.
- Excellent communication skills, with the ability to explain technical concepts to non-technical stakeholders.
- Strong attention to detail, especially in ensuring data accuracy and consistency.
- Ability to work independently in a fast-paced, mission-driven environment.
- Strong collaboration skills across cross-functional technical and business teams.
- US Citizenship or Permanent Residency required.
- Must reside in the Continental US.
- Depending on the government agency, specific requirements may include public trust background check or security clearance.
Preferred Qualifications
- AWS Data Engineer certification.
- Experience with cloud platforms such as AWS, Azure, or GCP (especially data services like S3, Glue, Databricks, or BigQuery).
- Familiarity with data orchestration tools (e.g., Airflow, NiFi, or similar).
- Experience supporting healthcare, insurance, or CMS-related data environments.
- Knowledge of data modeling techniques (dimensional modeling, star/snowflake schemas).
- Experience with DevOps practices, CI/CD pipelines, and infrastructure-as-code for data systems.
- Familiarity with real-time analytics and event-driven architectures.