This position can be remote, but US based candidates only.
Dealer Inspire (DI) is a leading disruptor in the automotive industry through our innovative culture, legendary service, and kick-ass website, technology, and marketing solutions. Our mission is to future-proof local dealerships by building the essential, mobile-first platform that makes automotive retail faster, easier, and smarter for both shoppers and dealers. Headquartered in Naperville, IL, our team of nearly 600 work friends is spread across the United States and Canada, pushing the boundaries and getting **** done every day, together.
DI offers an inclusive environment that celebrates collaboration and thinking differently to solve the challenges our clients face. Our shared success continues to lead to rapid growth and positive change, which opens up opportunities to advance your career to the next level by working with passionate, creative people across skill sets. If you want to be challenged, learn every day, and work as a team with some of the best in the industry, we want to meet you. Apply today!
Want to learn more about who we are? Check us out here!
Dealer Inspire is changing the way car dealerships do business through data. We are assembling a team of engineers and data scientists to help build the next-generation distributed computing platform to support data-driven analytics and predictive modeling.
We are looking for a Data Engineer to join the team and play a critical role in the design and implementation of sophisticated data pipelines and real-time analytics streams that serve as the foundation of our data science platform. Candidates should have the following qualifications:
- 2–5 years of experience as a data engineer in a professional setting
- Knowledge of the ETL process and patterns of periodic and real-time data pipelines
- Experience with data types and data transfer between platforms
- Proficiency with Python and related libraries to support the ETL process
- Working knowledge of SQL
- Experience with Linux-based systems and the command line (bash, etc.)
- Knowledge of cloud-based AWS resources such as EC2, S3, and RDS
- Able to work closely with data scientists on the demand side
- Able to work closely with domain experts and data source owners on the supply side
- Ability to build a data pipeline monitoring system with robust, scalable dashboards and alerts for 24/7 operations
- College degree in a technical area (Computer Science, Information Technology, Mathematics, or Statistics)
- Experience with Apache Kafka, Spark, Ignite and/or other big data tools
- Experience with JavaScript, Node.js, PHP, and other web technologies
- Working knowledge of Java or Scala
- Familiarity with tools such as Packer, Terraform, and CloudFormation
What we are looking for in a candidate:
- Experience with data engineering, Python and SQL
- Willingness to learn new technologies and a whatever-it-takes attitude towards building the best possible data science platform
- A person who loves data and all things data-related, a.k.a. a self-described data geek
- Enthusiasm and a “get it done” attitude!
What we offer:
- Health Insurance with BCBS, Delta Dental (orthodontics coverage available), EyeMed Vision
- 401k plan with company match
- Tuition Reimbursement
- 13 days paid time off, parental leave, and selected paid holidays
- Life and Disability Insurance
- Subsidized gym membership
- Subsidized internet access for your home
- Peer-to-Peer Bonus program
- Work from home Fridays
- Weekly in-office yoga classes
- Fully stocked kitchen and refrigerator
*Not a complete, detailed list. Benefits are subject to terms and eligibility requirements.