Acrisure Technology Group (ATG) is a fast-paced, AI-driven team building innovative software to disrupt the $6T+ insurance industry. Our mission is to help the world share its risk more intelligently to power a more vibrant economy. To do this, we are transforming insurance distribution and underwriting into a science.
At the core of our operating model is our technology: we’re building the world’s premier AI Factory for risk and applying it at the center of Acrisure, a privately held company recognized as one of the world's top 10 insurance brokerages and the fastest-growing insurance brokerage globally. By using the latest technology and advances in AI to push the boundaries of understanding risk, we are systematically converting data into predictions, insights, and choices. We believe we can remove the constraints of scale, scope, and learning that have existed in the insurance industry for centuries.
We are a small team of extremely high-caliber engineers with diverse backgrounds across industries and technologies. Our engineers have worked at large companies like Google and Amazon, at high-frequency trading firms like Two Sigma and Jump Trading, and at a variety of smaller startups, some as successful startup founders.
As a Software Engineer focused on Data Engineering, you’ll be an essential part of the team building world-class software to transform the insurance industry. Working closely with engineers, researchers, product and design talent, and domain experts, you will design and implement new data processing systems that enable both cutting-edge research and high-quality user experiences. As a successful candidate, you will take full advantage of state-of-the-art tools, conceiving new ones when the right solution does not yet exist. You are driven by a passion for improving the world through technology and delighting users. Help us turn vision into reality. Here are some of the ways in which you’ll achieve impact:
- Build beautiful, efficient, and reliable technology.
- Collaborate with researchers and engineers to design and implement efficient data pipelines.
- Assist in designing and maintaining our tech stack, building when it makes sense, inventing when necessary, and continually upgrading as tools evolve.
- Identify, adopt, and evangelize best practices.
- Identify and advocate for creative implementations that optimize business impact.
- Measure the effectiveness of new tools, find and address performance issues, and drive continuous improvement.
- Utilize metrics and data to make the best possible decisions.
- Build data expertise and own data quality metrics.
- Work collegially and effectively as we grow a world-class, diverse engineering team.
Here’s what will help you succeed in this role. You:
- Possess exceptional knowledge of computer science fundamentals and software engineering principles, as well as expertise in distributed data processing.
- Have experience writing recurring data ingestion and validation pipelines across multiple data sources.
- Have strong knowledge of data architecture and modeling best practices.
- Have proficiency in SQL.
- Are product- and customer-focused, with a passion for delighting the end user using data.
- Are excited about the opportunity to use data and AI to transform the insurance industry.
- Are entrepreneurial and action-biased; self-directed and excited to build something from scratch in a fast-paced, experimentation-driven environment.
- Have strong communication skills that allow you to be effective on a cross-functional team and to deliver data-driven insights.
- Stamp out unnecessary complexity, harness necessary complexity, and make complex topics clear and accessible to others.
- Possess empathy, kindness towards others, a positive attitude, and self-awareness.
- Bring a unique, non-traditional perspective that enhances our team’s problem-solving abilities.
- Have a Bachelor’s degree in Computer Science or a related field, or equivalent experience.
- Are willing and able to work from the headquarters in Austin, Texas (preferred), with remote roles considered for the right candidate.
- Have 2+ years of experience in a Data Engineering or similar role using modern data processing techniques.
- Have experience with Google Cloud Platform (BigQuery, Bigtable, Dataflow), as well as other data processing tools such as Kafka.
- Have experience using or managing a data governance tool.
- Have strong experience with Python, Java, or Kotlin.
We don’t expect any single candidate to have expertise across all of these areas. If you are a solid engineer eager to work on data- or insurance-related platforms and products, we are eager to talk to you.