Pickle
Senior Data Engineer

New York, NY, USA
Must be located in NYC (we are in the office Monday through Thursday). Please apply by emailing recruiting@shoponpickle.com with “Senior Data Engineer” in the subject line and include the following:
Share 1-3 (max) bullets on why you think you’re a standout applicant for this role.
Share 1-2 (max) bullets summarizing an initiative you’re most proud of and the impact it drove (we love metrics!).
Please attach your resume.
Pickle is a rental marketplace that aims to monetize the billions of underutilized assets sitting in consumers’ closets and brands’ inventory. Users can easily tap into shared closets within their community through flexible and/or on-demand delivery options. Our goal is to provide affordable and convenient access to quality items exactly when our users need them. We are starting with P2P clothing/accessories and expanding to other categories.
Role Overview:
As our first dedicated Senior Data Engineer, you will architect and scale the data hub that fuels every insight, experiment, and decision across the company. You’ll own the full lifecycle of our data platform—from ingestion and modeling to governance and observability—while partnering closely with Product, Growth, and Engineering to unlock self-serve analytics and advanced use cases.
Requirements
6+ years of professional data-engineering experience designing and running production-grade ELT/ETL pipelines
Deep expertise with Snowflake or a similar tool (warehouse design, performance tuning, security, and cost management)
Strong SQL plus proficiency in Python (or TypeScript/Node.js) for data transformation and orchestration
Hands-on experience with modern data-stack tooling (e.g., dbt, Airflow/Prefect, Fivetran/Hevo, Kafka/Kinesis)
Solid grasp of data-governance concepts (catalogs, lineage, quality testing, RBAC, PII handling)
Cloud infrastructure skills—AWS preferred (S3, ECS/Lambda, IAM, Secrets Manager, CloudWatch)
Proven track record of setting up data infrastructure and analytics processes from the ground up in a high-growth or startup environment
Excellent communication skills and the ability to translate business questions into scalable datasets and metrics
Bonus: experience supporting experimentation frameworks, real-time/event-driven pipelines, or ML feature stores
Key Responsibilities
Own the data platform roadmap: design, build, and scale the pipelines and orchestration that feed Pickle’s central data hub
Stand up analytics infrastructure: establish best-practice processes for data modeling, testing, CI/CD, and observability to ensure reliable, timely data
Implement and optimize Snowflake (or a similar tool): tune performance, manage costs, and leverage advanced features (e.g., zero-copy clones, data sharing)
Champion data governance & security: implement standards for documentation, data quality, lineage, and access control
Enable self-service analytics: partner with analysts and stakeholders to expose trusted datasets and metrics in BI tools and experimentation platforms
Collaborate across functions: work with Product, Growth, Operations, and Engineering to translate business requirements into data assets that drive conversion, retention, and liquidity
Mentor and elevate the team: share best practices, review code, and influence architecture and tooling decisions as we scale
Measure and monitor: define SLAs/SLOs for data freshness and accuracy; set up dashboards and alerts to maintain platform health
Benefits
Competitive compensation and equity
Healthcare (Medical, Dental, Vision)
Take-what-you-need paid time off
Meal Pal credits to cover the cost of lunch
Stipend to help set up your desk and office environment
Work directly with the founders and executive team
Professional coaching, training, and development
Grow with the company
Pickle credits for our employees (we love when the team uses Pickle!)
Fun team events and company parties
Company offsites
Office space in NYC
Negotiable salary