
Element (e14s.com)

Data Engineer

Posted Yesterday
Remote
Hiring Remotely in United States
135K-155K Annually
Mid level

Who is Element?

We serve as a partner at the intersection of innovation and our clients' needs, efficiently crafting meaningful user experiences for government and commercial customers. By breaking down complex problems to their fundamental elements, we create modern digital solutions that drive efficiencies, maximize taxpayer dollars, and deliver essential outcomes that serve the people. 

Why Work at Element?

Make an impact that resonates: join our vibrant team and discover how you can improve lives through digital transformation. Our talented professionals bring unparalleled energy and engagement, setting a higher standard for impactful work. Come be a part of our team and shape a better future.

Position Summary:

We are seeking a driven Data Engineer to support a federal government contract focused on designing, building, and maintaining scalable data pipelines for enterprise-level data integration. The Data Engineer will be responsible for ensuring reliable, high-quality data flow across multiple systems while enabling analytics, reporting, and downstream application use cases.

This role requires strong experience in distributed data systems, real-time and batch processing, and working in secure, regulated government environments.

Key Responsibilities

  • Build and maintain high-volume, scalable data pipelines using Apache Kafka and Apache Spark, supporting both real-time and batch data processing needs.
  • Design, develop, and optimize data ingestion, transformation, and integration workflows across enterprise systems.
  • Ensure data quality, consistency, and integrity across four (4) disparate data sources, implementing validation, cleansing, and reconciliation processes.
  • Develop and maintain SQL-based data solutions, including complex queries, stored procedures, performance tuning, and data modeling.
  • Collaborate with data analysts, product owners, and application teams to define data requirements and ensure alignment with business needs.
  • Implement monitoring, logging, and alerting mechanisms to ensure reliability and observability of data pipelines.
  • Support data architecture design and contribute to best practices for scalable and secure data engineering solutions.
  • Ensure compliance with federal data governance, security, and privacy requirements.
  • Participate in Agile ceremonies and support iterative development and delivery of data capabilities.
  • Troubleshoot and resolve data pipeline issues, ensuring minimal disruption to downstream systems and reporting. 
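As a rough illustration of the validation, cleansing, and reconciliation work described above, the sketch below shows the shape of that logic in plain Python. The record fields (`id`, `amount`, `source`) and the per-source counting are invented examples, not details of the actual contract systems.

```python
# Hypothetical sketch of a validate/cleanse/reconcile step.
# Field names ("id", "amount", "source") are illustrative assumptions only.

def validate(record):
    """Return a cleansed copy of the record, or None if it fails validation."""
    if record.get("id") is None:
        return None
    try:
        amount = float(record.get("amount", ""))
    except ValueError:
        return None
    # Cleansing: normalize whitespace and casing on string fields.
    return {
        "id": str(record["id"]).strip(),
        "amount": round(amount, 2),
        "source": str(record.get("source", "unknown")).strip().lower(),
    }

def reconcile(sources):
    """Validate records from several sources and count accepted/rejected rows.

    `sources` maps a source name to its raw record list; returns
    (clean_records, report), where the report can feed the kind of
    monitoring and alerting mentioned above.
    """
    clean, report = [], {}
    for name, records in sources.items():
        accepted = [v for v in (validate(r) for r in records) if v is not None]
        report[name] = {
            "accepted": len(accepted),
            "rejected": len(records) - len(accepted),
        }
        clean.extend(accepted)
    return clean, report
```

In a real pipeline these checks would run inside the Spark jobs and the per-source report would be emitted to the observability stack rather than returned in memory.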

Minimum Requirements

  • Bachelor’s degree in Computer Science, Information Systems, Engineering, Data Science, or related field (or equivalent experience).
  • 3+ years of experience in data engineering, data integration, or related technical roles.
  • Strong hands-on experience with Apache Kafka for streaming data pipelines.
  • Strong experience with Apache Spark for large-scale data processing (batch and/or streaming).
  • Advanced SQL development experience, including complex queries, performance tuning, and data transformation logic.
  • Experience integrating and managing data across multiple heterogeneous data sources.
  • Experience working in the federal government or other highly regulated environments with security and compliance requirements.
  • Strong understanding of data quality management, data validation, and data governance practices.
  • Strong problem-solving and analytical thinking abilities.
  • Excellent communication skills, with the ability to explain technical concepts to non-technical stakeholders.
  • Strong attention to detail, especially in ensuring data accuracy and consistency.
  • Ability to work independently in a fast-paced, mission-driven environment.
  • Strong collaboration skills across cross-functional technical and business teams.
  • US Citizenship or Permanent Residency required. 
  • Must reside in the Continental US.
  • Depending on the government agency, specific requirements may include a public trust background check or a security clearance.
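The Kafka and Spark experience called for above centers on the micro-batch pattern: a continuous stream of events is grouped into small batches that are processed one at a time. The toy sketch below uses an in-memory list in place of Kafka and a plain function in place of a Spark job, purely to make the shape of the pattern visible; all names are illustrative.

```python
# Toy stand-in for the Kafka -> Spark micro-batch pattern: in production,
# Kafka supplies the event stream and Spark Structured Streaming forms and
# processes the micro-batches. Here an in-memory iterable plays the stream.

def micro_batches(stream, batch_size):
    """Yield successive fixed-size batches of events from a stream."""
    batch = []
    for event in stream:
        batch.append(event)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:                  # flush the final partial batch
        yield batch

def process(batch):
    """Illustrative per-batch transformation: sum the 'value' field."""
    return sum(event["value"] for event in batch)
```

The same structure applies whether the trigger is a record count, as here, or a time interval, as in Spark's processing-time triggers.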

Preferred Qualifications

  • AWS Data Engineer certification.
  • Experience with cloud platforms such as AWS, Azure, or GCP (especially data services like S3, Glue, Databricks, or BigQuery).
  • Familiarity with data orchestration tools (e.g., Airflow, NiFi, or similar).
  • Experience supporting healthcare, insurance, or CMS-related data environments.
  • Knowledge of data modeling techniques (dimensional modeling, star/snowflake schemas).
  • Experience with DevOps practices, CI/CD pipelines, and infrastructure-as-code for data systems.
  • Familiarity with real-time analytics and event-driven architectures.
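The dimensional-modeling knowledge listed above boils down to one structural idea: a fact table holds measures plus foreign keys, and dimension tables hold the descriptive attributes used to slice them. The minimal sketch below shows that star-schema shape with in-memory tables; the table and column names are invented examples (the CMS reference echoes the qualification above, not a real schema).

```python
# Minimal star-schema illustration: a fact table of measures keyed into a
# dimension table of attributes. All names here are invented examples.

dim_agency = {                      # dimension: one row per agency
    1: {"agency": "CMS", "region": "East"},
    2: {"agency": "VA",  "region": "West"},
}

fact_claims = [                     # fact: measures plus a dimension key
    {"agency_id": 1, "amount": 120.0},
    {"agency_id": 1, "amount": 80.0},
    {"agency_id": 2, "amount": 50.0},
]

def total_by(attribute):
    """Aggregate the fact table grouped by a dimension attribute,
    mimicking a SQL GROUP BY over a fact-to-dimension join."""
    totals = {}
    for row in fact_claims:
        key = dim_agency[row["agency_id"]][attribute]
        totals[key] = totals.get(key, 0.0) + row["amount"]
    return totals
```

Grouping by any dimension attribute (`agency`, `region`) without touching the fact rows is exactly the flexibility the star layout buys in a warehouse.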


