
Senior Data Engineer

Negotiable Salary

Plum Inc

San Francisco, CA, USA


Description

PLUM is a fintech company empowering financial institutions to grow their business through a cutting-edge suite of AI-driven software, purpose-built for lenders and their partners across the financial ecosystem. We are a boutique firm where each person's contributions and ideas are critical to the growth of the company. This is a fully remote position, open to candidates anywhere in the U.S. with a reliable internet connection. While we gather in person a few times a year, this role is designed to remain remote long-term. You will have autonomy and flexibility in a flat corporate structure that gives you the opportunity for your direct input to be realized and put into action. You'll collaborate with a high-performing team, including sales, marketers, and financial services experts, who stay connected through Slack, video calls, and regular team and company-wide meetings. We're a team that knows how to work hard, have fun, and make a meaningful impact, both together and individually.

Job Summary

We are seeking a Senior Data Engineer to lead the design and implementation of scalable data pipelines that ingest and process data from a variety of external client systems. This role is critical in building the data infrastructure that powers Plum's next-generation AI-driven products. You will work with a modern data stack including Python, Databricks, AWS, Delta Lake, and more. As a senior member of the team, you'll take ownership of architectural decisions, system design, and production readiness, working with team members to ensure data is reliable, accessible, and impactful.

Key Responsibilities

- Design and architect end-to-end data processing pipelines: ingestion, transformation, and delivery to the Delta Lakehouse.
- Integrate with external systems (e.g., CRMs, file systems, APIs) to automate ingestion of diverse data sources.
- Develop robust data workflows using Python and Databricks Workflows.
- Implement modular, maintainable ETL processes following SDLC best practices and Git-based version control.
- Contribute to the evolution of our Lakehouse architecture to support downstream analytics and machine learning use cases.
- Monitor, troubleshoot, and optimize data workflows in production.
- Collaborate with cross-functional teams to translate data needs into scalable solutions.

Requirements

- Master's degree in Computer Science, Engineering, Physics, or a related technical field, or equivalent work experience.
- 3+ years of experience building and maintaining production-grade data pipelines.
- Proven expertise in Python and SQL for data engineering tasks.
- Strong understanding of lakehouse architecture and data modeling concepts.
- Experience working with Databricks, Delta Lake, and Apache Spark.
- Hands-on experience with AWS cloud infrastructure.
- Track record of integrating data from external systems, APIs, and databases.
- Strong problem-solving skills and ability to lead through ambiguity.
- Excellent communication and documentation habits.

Preferred Qualifications

- Experience building data solutions in Fintech, Sales Tech, or Marketing Tech domains.
- Familiarity with CRM platforms (e.g., Salesforce, HubSpot) and CRM data models.
- Experience using ETL tools such as Fivetran or Airbyte.
- Understanding of data governance, security, and compliance best practices.

Benefits

- A fast-paced, collaborative startup culture with high visibility.
- Autonomy, flexibility, and a flat corporate structure that gives you the opportunity for your direct input to be realized and put into action.
- Opportunity to make a meaningful impact in building a company and culture.
- Equity in a financial technology startup.
- Generous health, dental, and vision coverage for employees and family members, plus 401K.
- Eleven paid holidays and unlimited discretionary vacation days.
- Competitive compensation and bonus potential.
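The responsibilities above emphasize modular ETL stages: ingestion, transformation, and delivery. As an illustration only, a minimal pure-Python sketch of that shape might look like the following; the function and field names are hypothetical, and a real pipeline at Plum would read from external systems and write to Delta Lake via PySpark rather than using in-memory lists.

```python
# Illustrative sketch of modular ETL stages (hypothetical names and fields).
# Plain dicts and lists stand in for what would be Spark DataFrames and
# Delta tables in a Databricks workflow.

def ingest(raw_records):
    """Ingestion stage: keep only records that parse cleanly."""
    return [r for r in raw_records if "loan_id" in r and r.get("amount") is not None]

def transform(records):
    """Transformation stage: normalize types and derive fields."""
    return [
        {"loan_id": str(r["loan_id"]),
         "amount": float(r["amount"]),
         "is_large": float(r["amount"]) >= 100_000}
        for r in records
    ]

def deliver(records, sink):
    """Delivery stage: append to the target table (a list stands in for Delta)."""
    sink.extend(records)
    return len(records)

raw = [
    {"loan_id": 1, "amount": "250000"},
    {"loan_id": 2, "amount": None},   # dropped at ingestion
    {"loan_id": 3, "amount": "50000"},
]
table = []
written = deliver(transform(ingest(raw)), table)
print(written, table[0]["is_large"])  # 2 True
```

Keeping each stage a small, pure function is what makes the pipeline testable under Git-based SDLC practices: each stage can be unit-tested in isolation before being wired into a Databricks Workflow.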

Source: workable

Location
San Francisco, CA, USA


You may also like

Workable
Senior Backend Platform Engineer, Distributed Systems
Help redefine how the DoD makes multi-billion-dollar force-design decisions. In this role, you'll fuse physics-driven simulation, interactive computing, and verified AI code-generation to create next-generation wargaming platforms. If building ultra-low-latency APIs, taming high-volume geospatial data, and leading with clean, production-ready Python gets you fired up, let's talk.

What you'll do

- Own the service layer: design, build, and scale FastAPI micro-services and lightning-fast ZeroMQ messaging pipelines running in Kubernetes and bare-metal clusters.
- Wrangle data at speed and scale: shape and query multi-TB Postgres/PostGIS datasets, orchestrate Redis for sub-millisecond state, and keep everything rock-solid under bursty load.
- Glue the stack together: expose crisp, well-versioned REST and WebSocket endpoints for the frontend crew and simulation kernel.
- Ship continuously: automate CI/CD, observability, and security hardening to DoD standards; push to prod with confidence.
- Lead by doing: drive code reviews, mentor teammates, and set the standard for test coverage and documentation.

Why Code Metal?

- Mission with impact: your APIs become the nervous system of digital battlefields influencing multi-billion-dollar defense acquisitions.
- Velocity: tight-knit teams, weekly releases, zero bureaucratic drag.
- Ownership: no passengers; every engineer ships code that matters.

Requirements

Must-have credentials

- 4+ years building production backends in modern Python, with deep FastAPI (or equivalent async framework) experience.
- Proven expertise in ZeroMQ, NATS, Kafka, or similar high-throughput messaging systems.
- Hands-on with Postgres/PostGIS and Redis in performance-critical workloads.
- Cloud-native chops: Docker, Kubernetes, and one major provider (AWS, GCP, Azure, or GovCloud).
- Active Secret clearance or eligibility to obtain one.

Bonus points

- C++ or Rust skills for bridging high-performance simulation modules.
- Experience hardening services for FedRAMP / STIG compliance.
- Observability mindset: Prometheus, Grafana, OpenTelemetry.
- TS/SCI clearance.

Benefits

- Health care plan with 100% premium coverage, including medical, dental, and vision.
- 401k with 5% matching.
- Paid Time Off (uncapped vacation, plus sick days and public holidays).
- Flexible hybrid work arrangement.
- Relocation assistance for qualifying employees.
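The messaging systems named above (ZeroMQ, NATS, Kafka) all build on a producer/consumer pattern with bounded buffers for backpressure. As a rough stdlib-only sketch, not any of those libraries' actual APIs, an asyncio queue can illustrate the shape:

```python
# Stdlib-only sketch of the producer/consumer pattern behind high-throughput
# messaging systems such as ZeroMQ or Kafka (their real APIs differ).
import asyncio

async def producer(queue, n):
    """Publish n messages, then a sentinel marking end-of-stream."""
    for i in range(n):
        await queue.put({"seq": i, "payload": f"msg-{i}"})
    await queue.put(None)

async def consumer(queue, received):
    """Drain messages until the sentinel arrives."""
    while True:
        msg = await queue.get()
        if msg is None:
            break
        received.append(msg["seq"])

async def main(n=5):
    queue = asyncio.Queue(maxsize=2)  # bounded queue applies backpressure
    received = []
    await asyncio.gather(producer(queue, n), consumer(queue, received))
    return received

result = asyncio.run(main())
print(result)  # [0, 1, 2, 3, 4]
```

The bounded `maxsize` is the key design point: when the consumer falls behind, `put` blocks, which is the same backpressure concern that ZeroMQ's high-water marks and Kafka's consumer lag address at production scale.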
Boston, MA, USA
Negotiable Salary
Workable
Senior Big Data Engineer
ABOUT US

Headquartered in the United States, TP-Link Systems Inc. is a global provider of reliable networking devices and smart home products, consistently ranked as the world's top provider of Wi-Fi devices. The company is committed to delivering innovative products that enhance people's lives through faster, more reliable connectivity. With a commitment to excellence, TP-Link serves customers in over 170 countries and continues to grow its global footprint. We believe technology changes the world for the better! At TP-Link Systems Inc., we are committed to crafting dependable, high-performance products to connect users worldwide with the wonders of technology. Embracing professionalism, innovation, excellence, and simplicity, we aim to assist our clients in achieving remarkable global performance and enable consumers to enjoy a seamless, effortless lifestyle.

KEY RESPONSIBILITIES

- Develop and maintain the Big Data Platform by performing data cleansing, data warehouse modeling, and report development on large datasets.
- Collaborate with cross-functional teams to provide actionable insights for decision-making.
- Manage the operation and administration of the Big Data Platform, including system deployment, task scheduling, proactive monitoring, and alerting to ensure stability and security.
- Handle data collection and integration tasks, including ETL development, data de-identification, and managing data security.
- Provide support for other departments by processing data, writing queries, developing solutions, performing statistical analysis, and generating reports.
- Troubleshoot and resolve critical issues, conduct fault diagnosis, and optimize system performance.

Requirements

REQUIRED QUALIFICATIONS

- Bachelor's degree or higher in Computer Science or a related field, with at least three years of experience maintaining a Big Data platform.
- Strong understanding of Big Data technologies such as Hadoop, Flink, Spark, Hive, HBase, and Airflow, with proven expertise in Big Data development and performance optimization.
- Familiarity with Big Data OLAP tools like Kylin, Impala, and ClickHouse, as well as experience in data warehouse design, data modeling, and report generation.
- Proficiency in Linux development environments and Python programming.
- Excellent communication, collaboration, and teamwork skills, with a proactive attitude and a strong sense of responsibility.

PREFERRED QUALIFICATIONS

- Experience with cloud-based deployments, particularly AWS EMR; familiarity with other cloud platforms is a plus.
- Proficiency in additional languages such as Java or Scala is a plus.

Benefits

- Salary Range: $150,000 - $180,000
- Free snacks and drinks, and provided lunch on Fridays
- Fully paid medical, dental, and vision insurance (partial coverage for dependents)
- Contributions to 401k funds
- Bi-annual reviews and annual pay increases
- Health and wellness benefits, including free gym membership
- Quarterly team-building events

At TP-Link Systems Inc., we are continually searching for ambitious individuals who are passionate about their work. We believe that diversity fuels innovation, collaboration, and drives our entrepreneurial spirit. As a global company, we highly value diverse perspectives and are committed to cultivating an environment where all voices are heard, respected, and valued. We are dedicated to providing equal employment opportunities to all employees and applicants, and we prohibit discrimination and harassment of any kind based on race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Beyond compliance, we strive to create a supportive and growth-oriented workplace for everyone.

If you share our passion and connection to this mission, we welcome you to apply and join us in building a vibrant and inclusive team at TP-Link Systems Inc. Please, no third-party agency inquiries; we are unable to offer visa sponsorships at this time.
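Among the responsibilities listed, data de-identification is the most self-contained to illustrate. A minimal stdlib sketch, with hypothetical field names and salted hashing as one common pseudonymization approach, might look like:

```python
# Sketch of PII de-identification via salted hashing (field names hypothetical).
# A production system would keep the salt in a secrets store and follow the
# organization's data-security and compliance policies.
import hashlib

SALT = b"example-salt"  # placeholder; never hard-code a real salt

def pseudonymize(value: str) -> str:
    """Replace a PII value with a stable, non-reversible token."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

def deidentify(record: dict, pii_fields=("email", "phone")) -> dict:
    """Return a copy of the record with PII fields replaced by tokens."""
    out = dict(record)
    for field in pii_fields:
        if field in out and out[field] is not None:
            out[field] = pseudonymize(out[field])
    return out

row = {"user_id": 42, "email": "a@example.com", "phone": "555-0100", "plan": "pro"}
clean = deidentify(row)
print(clean["plan"], clean["email"] != row["email"])  # pro True
```

Because the same input always maps to the same token, pseudonymized columns can still be joined and aggregated in the warehouse, which is why this approach is a common choice for de-identified analytics datasets.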
Irvine, CA, USA
$150,000-180,000/year
© 2025 Servanan International Pte. Ltd.