Software Engineer, Data
Acrisure Technology Group (ATG) is a fast-paced, AI-driven team building innovative software to disrupt the $6T+ insurance industry. Our mission is to help the world share its risk more intelligently to power a more vibrant economy. To do this, we are transforming insurance distribution and underwriting into a science.
At the core of our operating model is our technology: we’re building the world’s premier AI Factory for risk and applying it at the center of Acrisure, a privately held company recognized as one of the world's top 10 insurance brokerages and the fastest-growing insurance brokerage globally. By using the latest technology and advances in AI to push the boundaries of understanding risk, we are systematically converting data into predictions, insights, and choices. We believe we can remove the constraints of scale, scope, and learning that have held the insurance industry back for centuries.
We are a small team of extremely high-caliber engineers with diverse backgrounds across industries and technologies. Our engineers have worked at large companies like Google and Amazon, at high-frequency trading firms like Two Sigma and Jump Trading, and at a variety of smaller startups; several have been successful startup founders themselves.
The Role
As Software Engineer, Data Engineering at ATG, you’ll be an essential part of the team building world-class software to transform the insurance industry. You will work collaboratively as part of a cross-functional team, including AI researchers, AI engineers, and product managers, to design and implement new data processing systems that enable both cutting-edge research and high-quality user experiences. As a successful candidate, you will take full advantage of state-of-the-art tools, conceive of new ones when the right solution does not yet exist, and act with a sense of urgency and agility to deliver value. You are driven by a passion for improving the world through technology and delighting users. Help us turn our vision into reality.
Our technology runs on Google Cloud and is configured with Kubernetes, leveraging various services in that environment. Our data storage layer includes BigQuery, Bigtable, and Postgres. We code primarily in Kotlin, Python, Java, and JavaScript and make use of many frameworks, including Dataflow, Cloud AI Platform, Kubeflow, Spring, and React.
Here are some of the ways in which you’ll achieve impact:
- Build efficient and reliable technology with a customer-first mindset.
- Collaborate with researchers and engineers to design and implement efficient data pipelines.
- Assist in designing and maintaining our tech stack, building when it makes sense, inventing when necessary, and upgrading as tools evolve.
- Identify, adopt, and evangelize best practices.
- Identify and advocate for creative implementations that optimize business impact.
- Measure the effectiveness of new features and tools, find and address performance issues, and drive continuous improvement.
- Utilize metrics and data to make the best possible decisions.
- Build data expertise and own data quality metrics.
- Work collegially and effectively as we grow a world-class, diverse engineering team.
You may be fit for this role if you:
- Possess strong knowledge of computer science fundamentals and software engineering principles, as well as expertise with distributed data processing.
- Have experience building recurring data ingestion and validation pipelines across multiple data sources.
- Have 2+ years of experience in a data engineering or similar role using modern data processing techniques.
- Have experience with Google Cloud Platform (BigQuery, Bigtable, Apache Beam/Dataflow), as well as other high-volume data processing tools such as Kafka.
- Are proficient in SQL.
- Possess empathy, kindness towards others, a positive attitude, and self-awareness.
- Have a Bachelor’s degree in Computer Science or a related field, or equivalent experience.
- Are excited to work at an early-stage company, experimenting to discover product-market fit, and are focused on maximizing business impact.
- Are willing and able to work from our headquarters in Austin, Texas (preferred); remote arrangements will be considered for the right candidate.
We don’t expect any single candidate to have expertise across all of these areas. If you are a solid engineer eager to work on data- or insurance-related platforms and products, we are eager to talk to you.