The Data Engineer will enhance the Enterprise Data Warehouse by modernizing ETL processes, ensuring data integrity, troubleshooting production issues, collaborating with teams, and contributing to technical governance.
Job Summary & Responsibilities
POSITION OVERVIEW:
We are seeking a technically skilled and experienced Data Engineer to provide support and enhancement of our Enterprise Data Warehouse. The role focuses on modernizing ETL processes within an on-premises Cloudera Data Platform (CDP) environment, leveraging technologies like Apache Spark, Apache Iceberg, and Apache Airflow for scalable, efficient, and reliable data transformation and management. The ideal candidate will have strong ETL development and troubleshooting skills, along with experience participating in production support environments.
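Airflow expresses an ETL pipeline as a directed acyclic graph (DAG) of tasks and runs each task only after its upstream dependencies succeed. As a rough, pure-Python illustration of that dependency ordering (using the standard library's `graphlib`; the task names are hypothetical stand-ins for real EDW steps, not taken from this posting):

```python
from graphlib import TopologicalSorter

# Hypothetical EDW pipeline steps: pull mainframe extracts, transform with
# Spark, load Iceberg tables, then refresh query-engine statistics.
# Each entry reads "task: {its upstream dependencies}".
dag = {
    "extract_mainframe": set(),
    "transform_spark": {"extract_mainframe"},
    "load_iceberg": {"transform_spark"},
    "refresh_impala_stats": {"load_iceberg"},
}

# static_order() yields the tasks in an order that respects every
# dependency edge, much as Airflow's scheduler does for a real DAG.
run_order = list(TopologicalSorter(dag).static_order())
print(run_order)
```

In a real Airflow DAG the same ordering would be declared with operators and the `>>` dependency syntax; the sketch above only shows the scheduling concept.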
Essential job functions:
- Development
- Contribute development efforts for ETL pipelines in the Enterprise Data Warehouse (EDW)
- Support and rebuild legacy ETL jobs (currently not using ACID transactions) with modern solutions using Apache Spark and Apache Iceberg to support ACID transactions.
- Transform and integrate EBCDIC mainframe data into Hive and Impala tables using Precisely Connect for Big Data.
- Optimize data transformation processes for performance, scalability, and reliability.
- Ensure data consistency, accuracy, and quality across the ETL pipelines.
- Utilize best practices for ETL code development, version control, and deployment using Azure DevOps.
- Production Support
- Share weekly 24/7 production support with the managed service vendor on a four-week rotation.
- Monitor ETL workflows and troubleshoot issues to ensure smooth production operations.
- Research and resolve user requests and issues
- Collaboration and Stakeholder Engagement
- Collaborate with cross-functional teams, including data engineers, business analysts, administrators, and quality analyst engineers to ensure alignment on requirements and deliverables.
- Engage with business stakeholders to understand data requirements and translate them into scalable technical solutions.
- Technical Governance
- Contribute to process documentation and follow best practices within the Enterprise Data Warehouse.
- Follow proper SDLC protocols within Azure DevOps code repository
- Stay updated on emerging technologies and trends to continuously improve data platform capabilities.
- Other tasks as assigned by management
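The mainframe integration work above involves translating EBCDIC-encoded records into a character set Hive and Impala can consume. Precisely Connect handles this (plus COBOL copybook layouts) at scale, but the core character-set translation can be sketched with Python's standard `codecs` module. Code page 037 is one common US EBCDIC code page; whether it matches the actual source system is an assumption:

```python
import codecs

# "HELLO" encoded in EBCDIC code page 037 (a common US mainframe code page).
ebcdic_bytes = b"\xc8\xc5\xd3\xd3\xd6"

# Decode the EBCDIC bytes to a Python string, then re-encode as UTF-8,
# the encoding Hive and Impala tables typically expect.
text = codecs.decode(ebcdic_bytes, "cp037")
utf8_bytes = text.encode("utf-8")
print(text)  # HELLO
```

Real mainframe extracts also carry packed-decimal (COMP-3) and binary fields that need field-level conversion, which is exactly the gap tools like Precisely Connect fill.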
MINIMUM REQUIREMENTS:
- Bachelor’s degree in IT or a similar field. (Additional equivalent experience above the required minimum may be substituted for the degree requirement.)
- 3+ years of experience in ETL development and data engineering roles
- 3+ years of advanced SQL experience
- 3+ years in Python and Linux for Spark-based development.
- Proven experience using Apache Spark, Apache Iceberg, or Apache Airflow for ETL pipelines.
- Strong familiarity with version control systems, especially Azure DevOps.
- Knowledge of data governance and security best practices in a distributed data environment.
- Familiarity with data modeling, schema design, and building data models for reporting needs.
- In-depth understanding of ETL frameworks, ACID transactions, change data capture, and distributed computing.
- Experience in designing and managing large-scale data pipelines and workflows.
- Excellent problem-solving and troubleshooting skills.
- Effective communication and collaboration skills for working with diverse teams and stakeholders.
- Timeline-centric, deadline-driven mindset.
- Awareness of enterprise applications and adherence to technical alignment standards.
- This position requires (6C) personnel security screening in accordance with the U.S. Department of Education’s (ED) policy regarding the personnel security screening requirements for all contractor and subcontractor employees. A qualified applicant must successfully submit for personnel security screening within 14 calendar days from employment offer.
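Rebuilding the non-ACID legacy jobs mentioned above typically means replacing overwrite-style loads with transactional upserts, e.g. Spark SQL's `MERGE INTO` against an Iceberg table. The merge semantics can be sketched in plain Python (the table, key, and column names below are hypothetical illustrations, not from this posting):

```python
# Conceptual sketch of MERGE INTO upsert semantics: apply change-data-capture
# rows to a target keyed by account_id. In production this would be a single
# atomic Iceberg transaction, roughly:
#   MERGE INTO edw.accounts t USING updates s ON t.account_id = s.account_id
#   WHEN MATCHED THEN UPDATE SET * WHEN NOT MATCHED THEN INSERT *
target = {
    1: {"account_id": 1, "balance": 100},
    2: {"account_id": 2, "balance": 250},
}
cdc_rows = [
    {"account_id": 2, "balance": 300},  # matched key  -> update
    {"account_id": 3, "balance": 50},   # unmatched key -> insert
]

for row in cdc_rows:
    target[row["account_id"]] = row     # upsert: update-or-insert by key

print(sorted(target))  # [1, 2, 3]
```

The point of moving this logic onto Iceberg is that the whole upsert commits atomically: readers see either the old snapshot or the new one, never a half-applied batch.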
PREFERRED QUALIFICATIONS:
- Experience with Cloudera Data Platform (CDP), including Hive and Impala
- Knowledge of Precisely Connect for Big Data or similar tools for mainframe data transformation
Top Skills
Apache Airflow
Apache Iceberg
Spark
Azure DevOps
Cloudera Data Platform
Hive
Impala
Linux
Precisely Connect for Big Data
Python
SQL