We're building the data infrastructure that makes AI agents trustworthy instead of error-prone. We provide continuously refreshed, verified B2B data for autonomous AI agents and GTM workflows. We've tripled growth while maintaining 100% gross dollar retention and staying cash-flow positive, and we power AI agents for Clay, Orbital, Dun & Bradstreet, and the next generation of AI GTM tools.
Our data platform is scaling rapidly, and we need experienced data engineers and architects who can design systems that handle massive data volumes reliably and efficiently. This role exists to strengthen our core data infrastructure, improve scalability and correctness, and ensure our datasets remain accurate, fresh, and trusted as customer usage grows. You'll work on a platform where data quality, reliability, and architecture directly impact customer trust and retention.
What you'll do:
- Design, build, and maintain scalable data pipelines handling high-volume structured and semi-structured data.
- Own data infrastructure end-to-end, from ingestion and storage through transformation, validation, and delivery.
- Architect and optimize data systems using Snowflake, S3, and modern cloud data stacks.
- Ensure data freshness, accuracy, and consistency across production systems (see the illustrative sketch after this list).
- Collaborate closely with backend, frontend, and AI engineers to support product and customer use cases.
- Define and enforce best practices around data modeling, schema design, and data quality checks.
- Continuously improve the performance, cost efficiency, observability, and reliability of the data platform.
- Help raise the overall data engineering bar as the company and our datasets scale.
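To give a concrete flavor of this work, here is a minimal sketch of the kind of freshness check the role might own, written in Snowflake SQL. The table name (company_records), column (updated_at), and 24-hour SLA are hypothetical illustrations, not details of our actual platform.

```sql
-- Hypothetical freshness check: return a row only when the newest
-- record in a dataset is older than a 24-hour refresh SLA.
SELECT
    'company_records' AS dataset,        -- hypothetical table name
    MAX(updated_at)   AS last_refreshed,
    DATEDIFF('hour', MAX(updated_at), CURRENT_TIMESTAMP()) AS hours_stale
FROM company_records
HAVING DATEDIFF('hour', MAX(updated_at), CURRENT_TIMESTAMP()) > 24;
```

In practice, checks like this run on a schedule and feed alerting, so stale data is caught before customers ever see it.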
What we're looking for:
- 5+ years of professional experience in data engineering and/or data architecture roles.
- Strong fundamentals in data modeling, ETL/ELT design, and distributed data systems.
- Hands-on experience with:
  - Snowflake
  - Cloud object storage (AWS S3 or equivalent)
  - High-volume batch and/or streaming data pipelines
  - SQL and data transformation frameworks
- Solid understanding of:
  - Data warehousing concepts
  - Data reliability, validation, and observability
  - Cloud infrastructure and cost-performance tradeoffs
- Experience working with large-scale, frequently updated datasets.
- Comfortable operating in a fast-moving, remote-first environment.
- You take ownership, communicate clearly, and design systems meant to last.

Preferred (not required):
- Familiarity with B2B data, GTM data, or enrichment pipelines
- Experience supporting AI/ML or agent-driven workflows
Why join:
- Product with real traction: Customers rely on our platform in production.
- High ownership: Small team where your work directly shapes the product.
- Engineering-driven culture: Quality and correctness matter.
- Growth-stage company: Clear product-market fit and momentum.
- Impact over process: Less bureaucracy, more building.
What we offer:
- Competitive compensation based on experience.
- Meaningful ownership and long-term growth opportunities.
- Flexible working hours.
- Fully remote-friendly team.
- Direct collaboration with founders and core engineering leadership.