Senior Data Engineer
Company Description
Living Security helps organizations protect their sensitive data by reducing their #1 cybersecurity risk, human error. WE BELIEVE in security for the sake of the person. Computers will always augment human judgment, not the other way around. WE CREATE intelligence-driven security learning content that trains users to identify and react to current and emerging threats. WE DELIVER a meaningful experience, grounded in behavioral science, that is easy to understand and hard to forget.
Job Description
As our dream candidate, you have the engineering chops and the soft skills to communicate highly complex data trends to organizational leaders in a way that's easy to understand, along with rare curiosity and creativity. You're genuinely excited about data infrastructure and operations, and you're familiar with working in cloud environments, building data pipelines, and ETL. You'll collaborate with data architects on the vision and execution of our data platform.
You're willing to jump right in to help the company get the most out of our data!
Qualifications
Minimum Qualifications:
- 8+ years of experience in a Data Engineer role
- 5+ years of experience developing data pipelines or ETLs
- 5+ years of experience in Python or Java
- Experience working with Apache Spark for data transformation and aggregation
- Experience with relational SQL databases such as Postgres, and with Snowflake
- Experience with stream-processing systems such as Spark Streaming
- Ability to implement basic automation and CI/CD, plus experience with data pipeline and workflow management tools (e.g., Jenkins, Git, Airflow)
- Knowledge of TDD, automated testing principles, and testing best practices
- Strong familiarity with cloud-based services (AWS)
- Previous experience with microservices architecture
- Experience with cloud managed data services
- A deep understanding of data modeling and experience with data engineering tools and platforms
Preferred/Bonus Qualifications:
- Master's degree in computer science or related discipline
- Professional certifications
- Experience with Snowflake or Redshift
- Experience with Kafka, Spark, Kinesis, and/or Hadoop
- Built large-scale data pipelines and data-centric applications in a production setting, using distributed storage platforms such as HDFS, S3, or NoSQL databases (HBase, Cassandra, etc.) and distributed processing platforms such as Hadoop, Spark, Hive, Oozie, or Airflow
- Hands-on experience with Hadoop distributions such as MapR, Cloudera, or Hortonworks, and/or cloud-based offerings (AWS EMR, Azure HDInsight, Qubole, etc.)
- Designed big data architectures and built, operated, and maintained data pipelines and data storage within distributed systems
- Familiarity with container technologies (Docker/Kubernetes)
While we take our mission seriously, we have a lot of fun executing it! We believe our clients’ success is our success; we think of ourselves as an extension of their team and align our goals with theirs. We're looking for someone with a passion for our clients and a client-focused mindset for value-driven, strategic improvements to join our team.
Additional Information
- Responsible PTO policy
- Flex-time
- Paid Holidays
- Health Insurance
- Dental and vision insurance
- 401(k)
All your information will be kept confidential according to EEO guidelines.