
Senior Data Engineer

Negotiable Salary

Plum Inc

San Francisco, CA, USA


Description

PLUM is a fintech company empowering financial institutions to grow their business through a cutting-edge suite of AI-driven software, purpose-built for lenders and their partners across the financial ecosystem. We are a boutique firm where each person's contributions and ideas are critical to the growth of the company.

This is a fully remote position, open to candidates anywhere in the U.S. with a reliable internet connection. While we gather in person a few times a year, this role is designed to remain remote long-term. You will have autonomy and flexibility in a flat corporate structure that gives you the opportunity for your direct input to be realized and put into action. You'll collaborate with a high-performing team, including sales, marketing, and financial services experts, who stay connected through Slack, video calls, and regular team and company-wide meetings. We're a team that knows how to work hard, have fun, and make a meaningful impact, both together and individually.

Job Summary

We are seeking a Senior Data Engineer to lead the design and implementation of scalable data pipelines that ingest and process data from a variety of external client systems. This role is critical in building the data infrastructure that powers Plum's next-generation AI-driven products. You will work with a modern data stack including Python, Databricks, AWS, Delta Lake, and more. As a senior member of the team, you'll take ownership of architectural decisions, system design, and production readiness, working with team members to ensure data is reliable, accessible, and impactful.

Key Responsibilities

• Design and architect end-to-end data processing pipelines: ingestion, transformation, and delivery to the Delta Lakehouse.
• Integrate with external systems (e.g., CRMs, file systems, APIs) to automate ingestion of diverse data sources.
• Develop robust data workflows using Python and Databricks Workflows.
• Implement modular, maintainable ETL processes following SDLC best practices and Git-based version control.
• Contribute to the evolution of our Lakehouse architecture to support downstream analytics and machine learning use cases.
• Monitor, troubleshoot, and optimize data workflows in production.
• Collaborate with cross-functional teams to translate data needs into scalable solutions.

Requirements

• Master's degree in Computer Science, Engineering, Physics, or a related technical field, or equivalent work experience.
• 3+ years of experience building and maintaining production-grade data pipelines.
• Proven expertise in Python and SQL for data engineering tasks.
• Strong understanding of lakehouse architecture and data modeling concepts.
• Experience working with Databricks, Delta Lake, and Apache Spark.
• Hands-on experience with AWS cloud infrastructure.
• Track record of integrating data from external systems, APIs, and databases.
• Strong problem-solving skills and ability to lead through ambiguity.
• Excellent communication and documentation habits.

Preferred Qualifications

• Experience building data solutions in Fintech, Sales Tech, or Marketing Tech domains.
• Familiarity with CRM platforms (e.g., Salesforce, HubSpot) and CRM data models.
• Experience using ETL tools such as Fivetran or Airbyte.
• Understanding of data governance, security, and compliance best practices.

Benefits

• A fast-paced, collaborative startup culture with high visibility.
• Autonomy, flexibility, and a flat corporate structure that gives you the opportunity for your direct input to be realized and put into action.
• Opportunity to make a meaningful impact in building a company and culture.
• Equity in a financial technology startup.
• Generous health, dental, and vision coverage for employees and family members + 401K.
• Eleven paid holidays and unlimited discretionary vacation days.
• Competitive compensation and bonus potential.

Source: Workable

Location
San Francisco, CA, USA


You may also like

Workable
SAP Payroll Technical
We are seeking a highly skilled Payroll Technical Consultant with extensive experience in US Payroll. The ideal candidate will have a strong background in ABAP HR code development and a deep understanding of Core HR and Payroll modules. This role involves designing and developing custom HR-ABAP programs, working with PNP/PNPCE logical databases, and enhancing payroll processes.

• Experience: 9-15 years in payroll technical consulting.
• ABAP HR Development: Proficient in ABAP HR code development.
• Custom Programs: Design and write HR-ABAP custom programs; modify standard programs as needed.
• Database Programming: Experience with PNP/PNPCE logical databases and cluster programming for Core HR, payroll, and time management.
• Core HR & Payroll Knowledge: In-depth knowledge of Core HR and Payroll modules, including schemas, functions, and operations.
• HR Tables/Clusters: Familiarity with HR, Core HR, and Payroll tables/clusters; experience in developing reports/BDC in the HR module.
• Payroll Processes: Strong experience in Core HR and payroll processes, including preparing payroll reports.
• Reports & Interfaces: Design and develop reports, interfaces, infotypes, and enhancements.
• Team Development: Guide and develop team members to enhance their technical capabilities and productivity.
• Forms Experience: Hands-on experience with SMART Forms and HR Forms.
• User Exits & BAPIs: Proficient in user exits, BAdIs, and BAPIs.
• OOP Concepts: Strong understanding of object-oriented programming concepts.
• Technical Documentation: Experience in requirement gathering, designing technical documents, unit testing, and code review.

Qualifications:
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• Proven experience in payroll technical consulting with a focus on US Payroll.
• Excellent problem-solving skills and attention to detail.
• Strong communication and interpersonal skills.
Burbank, CA, USA
Negotiable Salary
Workable
Machine Learning Engineer, ML Runtime & Optimization
Founded in 2016 in Silicon Valley, Pony.ai has quickly become a global leader in autonomous mobility and is a pioneer in extending autonomous mobility technologies and services at a rapidly expanding footprint of sites around the world. Operating Robotaxi, Robotruck, and Personally Owned Vehicles (POV) business units, Pony.ai is an industry leader in the commercialization of autonomous driving and is committed to developing the safest autonomous driving capabilities on a global scale. Pony.ai's leading position has been recognized, with CNBC ranking Pony.ai #10 on its CNBC Disruptor list of the 50 most innovative and disruptive tech companies of 2022. In June 2023, Pony.ai was recognized on the XPRIZE and Bessemer Venture Partners inaugural "XB100" 2023 list of the world's top 100 private deep tech companies, ranking #12 globally. As of August 2023, Pony.ai has accumulated nearly 21 million miles of autonomous driving globally. Pony.ai went public on NASDAQ in Nov. 2024.

Responsibilities

The ML Infrastructure team at Pony.ai provides a set of tools to support and automate the lifecycle of the AI workflow, including model development, evaluation, optimization, deployment, and monitoring. As a Machine Learning Engineer in ML Runtime & Optimization, you will be developing technologies to accelerate the training and inference of the AI models in autonomous driving systems. This includes:

• Identifying key applications for current and future autonomous driving problems and performing in-depth analysis and optimization to ensure the best possible performance on current and next-generation compute architectures.
• Collaborating closely with diverse groups in Pony.ai, including both hardware and software, to optimize and craft core parallel algorithms as well as to influence the next-generation compute platform architecture design and software infrastructure.
• Applying model optimization and efficient deep learning techniques to models and optimized ML operator libraries.
• Working across the entire ML framework/compiler stack (e.g., Torch, CUDA, and TensorRT) and building system-efficient deep learning models.

Requirements

• BS/MS or Ph.D. in computer science, electrical engineering, or a related discipline.
• Strong programming skills in C/C++ or Python.
• Experience with model optimization, quantization, or other efficient deep learning techniques.
• Good understanding of hardware performance, regarding CPU or GPU execution models, threads, registers, cache, cost/performance trade-offs, etc.
• Experience with profiling, benchmarking, and validating performance for complex computing architectures.
• Experience in optimizing the utilization of compute resources, identifying and resolving compute and data flow bottlenecks.
• Strong communication skills and ability to work cross-functionally between software and hardware teams.

Preferred Qualifications: One or more of the following fields are preferred

• Experience with parallel programming, ideally CUDA, OpenCL, or OpenACC.
• Experience in computer vision, machine learning, and deep learning.
• Strong knowledge of software design, programming techniques, and algorithms.
• Good knowledge of common deep learning frameworks and libraries.
• Deep knowledge of system performance, GPU optimization, or ML compilers.

Compensation and Benefits

Base Salary Range: $140,000 - $250,000 Annually

Compensation may vary outside of this range depending on many factors, including the candidate's qualifications, skills, competencies, experience, and location. Base pay is one part of the total compensation, and this role may be eligible for bonuses/incentives and restricted stock units. We also provide the following benefits to eligible employees:

• Health Care Plan (Medical, Dental & Vision)
• Retirement Plan (Traditional and Roth 401k)
• Life Insurance (Basic, Voluntary & AD&D)
• Paid Time Off (Vacation & Public Holidays)
• Family Leave (Maternity, Paternity)
• Short Term & Long Term Disability
• Free Food & Snacks
Fremont, CA, USA
$140,000-250,000/year
Craigslist
CDL A Truck Driver | Home Daily!
***Apply now at drive10roads.com***

10 Roads Express is hiring part-time Class A CDL truck drivers to serve our mission of providing reliable service for customers across America.

• Home Daily schedules available
• $5 weekend differential (Fri-Sun)
• $500 referral bonus (paid after referred driver completes first trip)
• Internal transfer program after 30 days

Starting Pay: $34.81 - $39.81 Per Hour* | Paid for all drive and on-duty time!

Schedule: Wenatchee, WA to Spokane, WA - PT | Home Daily!
Days On: Saturday & Sunday
Days Off: Monday - Friday
Start Time: 11:45 AM
End Time: 7:20 PM
Hours Per Trip: 7.5
Hours Per Week: 15

*$28.95 + $5.00 in Weekend Differential Pay (Friday to Sunday or on Federal Holidays) + $5.86 in Benefit Pay (up to 40 hours per week). Benefit Pay offsets monthly health insurance premiums.

Experience / Background:
• Class A CDL is required
• 12+ months of verifiable tractor trailer driving experience
• No DUI in the last five years

Perks and Benefits:
• Paid vacation after one (1) year
• Paid federal holidays
• Direct deposit

***Apply now at drive10roads.com***

For more information, please call Melissa at 844-886-5335, Ext: 1740. Please reference this Job ID #: 355003.

10 Roads Express, LLC is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. Military veterans are encouraged to apply.

WENA-W-FT-9804H-07232025-2
1031 Crestwood St, Wenatchee, WA 98801, USA
$34-39/hour
Workable
C++ Market Data Engineer
Trexquant is a growing systematic fund at the forefront of quantitative finance, with a core team of highly accomplished researchers and engineers. To keep pace with our expanding global trading operations, we are seeking a C++ Market Data Engineer to design and build ultra-low-latency feed handlers for premier vendor feeds and major exchange multicast feeds. This is a high-impact role that sits at the heart of Trexquant's trading platform; the quality, speed, and reliability of your code directly influence every strategy we run.

Responsibilities

• Design & implement high-performance feed handlers in modern C++ for equities, futures, and options across global venues (e.g., NYSE, CME, Refinitiv RTS, Bloomberg B-PIPE).
• Optimize for micro- and nanosecond latency using lock-free data structures, cache-friendly memory layouts, and kernel-bypass networking where appropriate.
• Build reusable libraries for message decoding, normalization, and publication to internal buses shared by research, simulation, and live trading systems.
• Collaborate with cross-functional teams to tune TCP/UDP multicast stacks, kernel parameters, and NIC settings for deterministic performance.
• Provide robust failover, gap-recovery, and replay mechanisms to guarantee data integrity under packet loss or venue outages.
• Instrument code paths with precision timestamping and performance metrics; drive continuous latency regression testing and capacity planning.
• Partner closely with quantitative researchers to understand downstream data requirements and to fine-tune delivery formats for both simulation and live trading.
• Produce clear architecture documents, operational run-books, and post-mortems; participate in a 24×7 follow-the-sun support rotation for mission-critical market-data services.

Requirements

• BS/MS/PhD in Computer Science, Electrical Engineering, or a related field.
• 3+ years of professional C++ (C++14/17/20) development experience focused on low-latency, high-throughput systems.
• Proven track record building or maintaining real-time market-data feeds (e.g., Refinitiv RTS/TREP, Bloomberg B-PIPE, OPRA, CME MDP, ITCH).
• Strong grasp of concurrency, lock-free algorithms, memory-model semantics, and compiler optimizations.
• Familiarity with serialization formats (FAST, SBE, Protocol Buffers) and time-series databases or in-memory caches.
• Comfort with scripting in Python for prototyping, testing, and ops automation.
• Excellent problem-solving skills, an ownership mindset, and the ability to thrive in a fast-paced trading environment.
• Familiarity with containerization (Docker/K8s) and public-cloud networking (AWS, GCP).

Benefits

• Competitive salary, plus bonus based on individual and company performance.
• Collaborative, casual, and friendly work environment while solving the hardest problems in the financial markets.
• PPO health, dental, and vision insurance premiums fully covered for you and your dependents.
• Pre-Tax Commuter Benefits

Trexquant is an Equal Opportunity Employer.
Stamford, CT, USA
Negotiable Salary
© 2025 Servanan International Pte. Ltd.