Senior Data Engineer

Negotiable Salary

Plum Inc

San Francisco, CA, USA

Description

PLUM is a fintech company empowering financial institutions to grow their business through a cutting-edge suite of AI-driven software, purpose-built for lenders and their partners across the financial ecosystem. We are a boutique firm where each person's contributions and ideas are critical to the growth of the company.

This is a fully remote position, open to candidates anywhere in the U.S. with a reliable internet connection. While we gather in person a few times a year, this role is designed to remain remote long-term. You will have autonomy and flexibility in a flat corporate structure that gives you the opportunity for your direct input to be realized and put into action. You'll collaborate with a high-performing team, including sales, marketers, and financial services experts, who stay connected through Slack, video calls, and regular team and company-wide meetings. We're a team that knows how to work hard, have fun, and make a meaningful impact, both together and individually.

Job Summary

We are seeking a Senior Data Engineer to lead the design and implementation of scalable data pipelines that ingest and process data from a variety of external client systems. This role is critical in building the data infrastructure that powers Plum's next-generation AI-driven products. You will work with a modern data stack including Python, Databricks, AWS, Delta Lake, and more. As a senior member of the team, you'll take ownership of architectural decisions, system design, and production readiness, working with team members to ensure data is reliable, accessible, and impactful.

Key Responsibilities

- Design and architect end-to-end data processing pipelines: ingestion, transformation, and delivery to the Delta Lakehouse (a minimal sketch of this pattern follows the list).
- Integrate with external systems (e.g., CRMs, file systems, APIs) to automate ingestion of diverse data sources.
- Develop robust data workflows using Python and Databricks Workflows.
- Implement modular, maintainable ETL processes following SDLC best practices and Git-based version control.
- Contribute to the evolution of our Lakehouse architecture to support downstream analytics and machine learning use cases.
- Monitor, troubleshoot, and optimize data workflows in production.
- Collaborate with cross-functional teams to translate data needs into scalable solutions.
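To make the first responsibility concrete, here is a minimal PySpark sketch of an ingest-transform-deliver step of the kind described: reading a raw export from an external system, applying a light transformation, and appending the result to a Delta table. The bucket path and table name are hypothetical placeholders; this illustrates the general pattern, not Plum's actual pipeline.

```python
# Minimal sketch: ingest a raw CRM export, normalize it, and land it
# in a Delta table. Assumes a Databricks cluster (or any Spark
# environment with Delta Lake support).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("crm-ingest-sketch").getOrCreate()

# Ingestion: read a nightly file drop from an external system.
# The S3 path is a hypothetical placeholder.
raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-bucket/crm/contacts/")
)

# Transformation: normalize column names and stamp load metadata so
# downstream consumers can trace each row back to its load.
cleaned = raw.select(
    [F.col(c).alias(c.strip().lower().replace(" ", "_")) for c in raw.columns]
).withColumn("_ingested_at", F.current_timestamp())

# Delivery: append into the lakehouse; "bronze.crm_contacts" is a
# hypothetical raw-layer table name.
cleaned.write.format("delta").mode("append").saveAsTable("bronze.crm_contacts")
```

In production, a step like this would typically run as one task in a Databricks Workflow, with scheduling, retries, and monitoring handled by the job rather than the script itself.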
Requirements

- Master's degree in Computer Science, Engineering, Physics, or a related technical field, or equivalent work experience.
- 3+ years of experience building and maintaining production-grade data pipelines.
- Proven expertise in Python and SQL for data engineering tasks.
- Strong understanding of lakehouse architecture and data modeling concepts.
- Experience working with Databricks, Delta Lake, and Apache Spark.
- Hands-on experience with AWS cloud infrastructure.
- Track record of integrating data from external systems, APIs, and databases.
- Strong problem-solving skills and the ability to lead through ambiguity.
- Excellent communication and documentation habits.

Preferred Qualifications

- Experience building data solutions in Fintech, Sales Tech, or Marketing Tech domains.
- Familiarity with CRM platforms (e.g., Salesforce, HubSpot) and CRM data models.
- Experience using ETL tools such as Fivetran or Airbyte.
- Understanding of data governance, security, and compliance best practices.

Benefits

- A fast-paced, collaborative startup culture with high visibility.
- Autonomy, flexibility, and a flat corporate structure that gives you the opportunity for your direct input to be realized and put into action.
- Opportunity to make a meaningful impact in building a company and culture.
- Equity in a financial technology startup.
- Generous health, dental, and vision coverage for employees and family members, plus a 401(k).
- Eleven paid holidays and unlimited discretionary vacation days.
- Competitive compensation and bonus potential.

Source: workable

Location
San Francisco, CA, USA

You may also like

Low-Latency Developer
Atto Trading, a dynamic quantitative trading firm founded in 2010 and a leader in global high-frequency strategies, is looking for a Low-Latency Developer to join our team. We are expanding an international, diverse team of experts in trading, statistics, engineering, and technology. Our disciplined approach, combined with rapid market feedback, allows us to quickly turn ideas into profit. Our environment of learning and collaboration allows us to solve some of the world's hardest problems together. As a small firm, we remain nimble and hold ourselves to the highest standards of integrity, ingenuity, and effort.

Position Highlights

We are modernizing our trading and research platform to scale our alpha trading business. This platform will enable researchers to explore, test, and deploy sophisticated signals, models, and strategies across asset classes in a robust, fully automated manner while maintaining competitive latency targets. As a Low-Latency Developer, you will be responsible for designing, optimizing, and maintaining high-performance trading systems to minimize latency.

Your Mission and Goals

- Analyze and optimize the performance of low-latency trading systems by identifying bottlenecks and inefficiencies in the code and implementing effective solutions.
- Develop and adapt the platform to support the demands of a fast-paced trading environment while effectively managing technical debt.

Requirements

- Over 5 years of experience as a low-latency developer with a focus on performance optimization in a high-frequency trading (HFT) environment.
- Experience with multiple components of an HFT platform or system, particularly those on the critical path.
- Experience working at an HFT firm during its startup phase and/or on a trading team is a significant plus.

Technical Skills

- Deep knowledge of HFT platforms: networking, kernel bypass, market data, order entry, threading, inter-process communication, and strategy APIs.
- Proven low-latency development and performance optimization in HFT.
- Strong proficiency in C++.
- Excellent understanding of CPU caches and cache efficiency.
- Experience with multithreaded and multi-process synchronization.
- Good understanding of networking protocols.
- Skilled in performance profiling and optimization tools.
- Advanced knowledge of Linux operating systems, including kernel-level device mechanisms.

About You

- Practical decision-making skills.
- Excellent communication skills.
- Strong analytical and problem-solving skills.
- Passion for trading.
- Ability to work independently and as part of a team.

Benefits

- Competitive rates of pay.
- Paid time off (5 weeks).
- Coverage of health insurance costs.
- Office lunches.
- Discretionary bonus system.
- Annual base salary range of $150,000 to $300,000; pay (base and bonus) may vary depending on job-related skills and experience.

Our Motivation

We are a company committed to staying at the forefront of technology. Our team is passionate about continual learning and improvement. With no external investors or customers, we are the primary users of the products we create, giving you the opportunity to make a real impact on our company's growth. Ready to advance your career? Join our innovative team and help shape the future of trading on a global scale. Apply now and let's create the future together!
New York, NY, USA
$150,000-300,000/year