Data Engineer at Self Financial
At Self Financial Inc, our mission is to help millions of people establish and build credit - especially those who are considered underserved or credit invisible. This is a rare chance to join a successful, venture-backed startup based in Austin.
Developing a successful fintech company requires deep collaboration across our teams, dedication, and a willingness to disrupt how the industry thinks about credit. Our team is passionate about empowering people to take control of their credit and challenging the status quo of accessing and building credit. Understanding our consumers is core to our growth in the years to come.
How we act with each other is how we act with our customers. We are direct, transparent and respectful, and we hope you embrace that approach.
About our team:
We're a growing team of engineers working to help people build credit and savings. We've been busy growing our business and scaling our operations to match that growth. We believe in a data-informed approach that balances qualitative and quantitative data with the human experience element.
We're looking for a self-starter with high attention to detail and experience building rich data models for analytic consumption. This role will interact heavily with BI analysts, data consumers, and data producers to build and support data pipelines.
As a Data Engineer, you will own the development, testing, and delivery of data from sources to our various data stores. You will be responsible for setting SLAs and delivery expectations for stakeholders, and for setting up appropriate monitoring and alerting in the event of an outage or data quality concern. This role will give you the opportunity to become a subject matter expert on data at Self. You will work closely with our Analytics team to establish and enforce data governance, ensuring the data being used is accurate and up to date.
Our ideal candidate is curious, analytical, and technically competent, with a deep appreciation for the potential data can unlock. They are driven by the desire to ensure that we continue to make the best possible decisions based on the best possible data.
- Bachelor’s degree in computer science or another STEM field.
- 2+ years of software industry experience (or 1+ years with a Master's/PhD).
- Proficiency in a scripting language such as Python, as well as SQL.
- Strong experience using relational databases and working with data models.
- Experience with common software engineering tools such as Git (or another VCS), Jira, Confluence, and similar platforms.
- Ability to work in a Unix-based operating system (Linux, macOS).
- Experience with AWS technologies such as S3, Redshift, Athena, Lambda, and Glue.
- Understanding of data-loading patterns from distributed file systems into warehousing databases.
- Foundational experience in data architecture systems (including data warehousing and data analytics).
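To give a flavor of the data-loading pattern mentioned above: a common approach is to bulk-load files that land in S3 into a warehouse table with a single COPY statement rather than row-by-row inserts. The sketch below only builds the COPY SQL; the table, bucket, and IAM role names are hypothetical, and an actual load would execute this against a live Redshift cluster.

```python
# Minimal sketch of an S3 -> Redshift bulk-load pattern.
# All identifiers (table, bucket, role ARN) are made up for illustration.

def build_copy_statement(table: str, s3_uri: str, iam_role: str) -> str:
    """Build a Redshift COPY statement for gzipped CSV files under an S3 prefix."""
    return (
        f"COPY {table} "
        f"FROM '{s3_uri}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS CSV GZIP IGNOREHEADER 1;"
    )

sql = build_copy_statement(
    "analytics.payments_staging",
    "s3://example-bucket/payments/2024-01-01/",
    "arn:aws:iam::123456789012:role/example-redshift-load",
)
print(sql)
```

In practice this statement would be submitted through a database connection or an orchestration tool, with monitoring around it to catch failed or partial loads.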
Roles and responsibilities:
- Own and build ETLs to support our ever-expanding dimensional models.
- Design and implement the conversion of raw data from multiple sources into analytic data models.
- Monitor and validate the fidelity of data pipelines and models.
- Build features and tools that give the analytics team the resources they need to solve problems.
- Work with operations to ensure our warehoused data is secure and compliant.
- Work with our analytics team to ensure our data governance is robust.