
C++ Market Data Engineer

Negotiable Salary

Trexquant Investment

Stamford, CT, USA


Description

Trexquant is a growing systematic fund at the forefront of quantitative finance, with a core team of highly accomplished researchers and engineers. To keep pace with our expanding global trading operations, we are seeking a C++ Market Data Engineer to design and build ultra-low-latency feed handlers for premier vendor feeds and major exchange multicast feeds. This is a high-impact role that sits at the heart of Trexquant's trading platform; the quality, speed, and reliability of your code directly influence every strategy we run.

Responsibilities
- Design and implement high-performance feed handlers in modern C++ for equities, futures, and options across global venues (e.g., NYSE, CME, Refinitiv RTS, Bloomberg B-PIPE).
- Optimize for micro- and nanosecond latency using lock-free data structures, cache-friendly memory layouts, and kernel-bypass networking where appropriate.
- Build reusable libraries for message decoding, normalization, and publication to internal buses shared by research, simulation, and live trading systems.
- Collaborate with cross-functional teams to tune TCP/UDP multicast stacks, kernel parameters, and NIC settings for deterministic performance.
- Provide robust failover, gap-recovery, and replay mechanisms to guarantee data integrity under packet loss or venue outages.
- Instrument code paths with precision timestamping and performance metrics; drive continuous latency regression testing and capacity planning.
- Partner closely with quantitative researchers to understand downstream data requirements and to fine-tune delivery formats for both simulation and live trading.
- Produce clear architecture documents, operational runbooks, and post-mortems; participate in a 24×7 follow-the-sun support rotation for mission-critical market-data services.

Requirements
- BS/MS/PhD in Computer Science, Electrical Engineering, or a related field.
- 3+ years of professional C++ (C++14/17/20) development experience focused on low-latency, high-throughput systems.
- Proven track record of building or maintaining real-time market-data feeds (e.g., Refinitiv RTS/TREP, Bloomberg B-PIPE, OPRA, CME MDP, ITCH).
- Strong grasp of concurrency, lock-free algorithms, memory-model semantics, and compiler optimizations.
- Familiarity with serialization formats (FAST, SBE, Protocol Buffers) and with time-series databases or in-memory caches.
- Comfort with scripting in Python for prototyping, testing, and ops automation.
- Excellent problem-solving skills, an ownership mindset, and the ability to thrive in a fast-paced trading environment.
- Familiarity with containerization (Docker/Kubernetes) and public-cloud networking (AWS, GCP).

Benefits
- Competitive salary plus a bonus based on individual and company performance.
- A collaborative, casual, and friendly work environment while solving some of the hardest problems in the financial markets.
- PPO health, dental, and vision insurance premiums fully covered for you and your dependents.
- Pre-tax commuter benefits.

Trexquant is an Equal Opportunity Employer.
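To make the latency work above concrete, here is a minimal sketch of one building block the responsibilities allude to: a bounded, cache-line-aligned single-producer/single-consumer ring buffer for handing decoded ticks from a feed-handler thread to a consumer without locks. This is an illustration only; the Tick layout and the SpscRing name are hypothetical, not Trexquant's actual code.

```cpp
// Hypothetical SPSC ring buffer: one producer thread, one consumer thread,
// no locks, power-of-two capacity, head/tail counters on separate cache
// lines to avoid false sharing.
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <optional>

struct Tick {                      // hypothetical normalized message
    std::uint64_t instrument_id;
    std::int64_t  price_nanos;     // fixed-point price, 1e-9 units
    std::uint32_t quantity;
    std::uint64_t recv_timestamp_ns;
};

template <std::size_t Capacity>
class SpscRing {
    static_assert((Capacity & (Capacity - 1)) == 0,
                  "Capacity must be a power of two");
public:
    // Producer thread only. Returns false when the ring is full.
    bool try_push(const Tick& t) {
        const std::uint64_t head = head_.load(std::memory_order_relaxed);
        const std::uint64_t tail = tail_.load(std::memory_order_acquire);
        if (head - tail == Capacity) return false;          // full
        buf_[head & (Capacity - 1)] = t;
        head_.store(head + 1, std::memory_order_release);   // publish slot
        return true;
    }
    // Consumer thread only. Returns an empty optional when no data is ready.
    std::optional<Tick> try_pop() {
        const std::uint64_t tail = tail_.load(std::memory_order_relaxed);
        const std::uint64_t head = head_.load(std::memory_order_acquire);
        if (tail == head) return std::nullopt;              // empty
        Tick t = buf_[tail & (Capacity - 1)];
        tail_.store(tail + 1, std::memory_order_release);   // free slot
        return t;
    }
private:
    alignas(64) std::atomic<std::uint64_t> head_{0};  // written by producer
    alignas(64) std::atomic<std::uint64_t> tail_{0};  // written by consumer
    alignas(64) Tick buf_[Capacity];
};
```

A production feed handler would layer much more on top (core pinning, batching, sequence-gap detection, replay), but the acquire/release hand-off above is the basic lock-free pattern this kind of role works with daily.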

Source: Workable

Location
Stamford, CT, USA

You may also like

Senior Data Engineer
PLUM is a fintech company empowering financial institutions to grow their business through a cutting-edge suite of AI-driven software, purpose-built for lenders and their partners across the financial ecosystem. We are a boutique firm where each person's contributions and ideas are critical to the growth of the company.

This is a fully remote position, open to candidates anywhere in the U.S. with a reliable internet connection. While we gather in person a few times a year, this role is designed to remain remote long-term. You will have autonomy and flexibility in a flat corporate structure that gives you the opportunity for your direct input to be realized and put into action. You'll collaborate with a high-performing team, including sales, marketing, and financial services experts, who stay connected through Slack, video calls, and regular team and company-wide meetings. We're a team that knows how to work hard, have fun, and make a meaningful impact, both together and individually.

Job Summary
We are seeking a Senior Data Engineer to lead the design and implementation of scalable data pipelines that ingest and process data from a variety of external client systems. This role is critical in building the data infrastructure that powers Plum's next-generation AI-driven products. You will work with a modern data stack including Python, Databricks, AWS, Delta Lake, and more. As a senior member of the team, you'll take ownership of architectural decisions, system design, and production readiness, working with team members to ensure data is reliable, accessible, and impactful.

Key Responsibilities
- Design and architect end-to-end data processing pipelines: ingestion, transformation, and delivery to the Delta Lakehouse.
- Integrate with external systems (e.g., CRMs, file systems, APIs) to automate ingestion of diverse data sources.
- Develop robust data workflows using Python and Databricks Workflows.
- Implement modular, maintainable ETL processes following SDLC best practices and Git-based version control.
- Contribute to the evolution of our Lakehouse architecture to support downstream analytics and machine learning use cases.
- Monitor, troubleshoot, and optimize data workflows in production.
- Collaborate with cross-functional teams to translate data needs into scalable solutions.

Requirements
- Master's degree in Computer Science, Engineering, Physics, or a related technical field, or equivalent work experience.
- 3+ years of experience building and maintaining production-grade data pipelines.
- Proven expertise in Python and SQL for data engineering tasks.
- Strong understanding of lakehouse architecture and data modeling concepts.
- Experience working with Databricks, Delta Lake, and Apache Spark.
- Hands-on experience with AWS cloud infrastructure.
- Track record of integrating data from external systems, APIs, and databases.
- Strong problem-solving skills and the ability to lead through ambiguity.
- Excellent communication and documentation habits.

Preferred Qualifications
- Experience building data solutions in Fintech, Sales Tech, or Marketing Tech domains.
- Familiarity with CRM platforms (e.g., Salesforce, HubSpot) and CRM data models.
- Experience using ETL tools such as Fivetran or Airbyte.
- Understanding of data governance, security, and compliance best practices.

Benefits
- A fast-paced, collaborative startup culture with high visibility.
- Autonomy, flexibility, and a flat corporate structure that gives you the opportunity for your direct input to be realized and put into action.
- The opportunity to make a meaningful impact in building a company and culture.
- Equity in a financial technology startup.
- Generous health, dental, and vision coverage for employees and family members, plus a 401(k).
- Eleven paid holidays and unlimited discretionary vacation days.
- Competitive compensation and bonus potential.
San Francisco, CA, USA
Negotiable Salary
Data Engineer - Job ID: DE2025
Build the data foundation that powers patient-centric decisions

Ascendis Pharma is a growth-stage biopharmaceutical company guided by the core values Patients · Science · Passion. To strengthen our cloud-native commercial and medical data platform, we are looking for a Data Engineer who combines solid data-pipeline craftsmanship with a collaborative, product-oriented mindset. You will join the same agile product team as our analysts, working in rapid product-discovery cycles to deliver trustworthy data that drives access to life-changing therapies.

Your four focus areas
1. Data Engineering: Design, build, and maintain scalable ELT pipelines that ingest Patient Support Program, Specialty Pharmacy, Claims, CRM, Marketing, Medical, and Financial data. Implement automated data-quality checks, monitoring, and alerting.
2. Data Modelling: Develop canonical data models and dimensional marts in our upcoming Databricks Lakehouse to enable self-service analytics. Apply best-practice naming, documentation, and version control.
3. Collaboration & Facilitation: Work side by side with analysts and product owners to translate business questions into robust technical solutions. Lead whiteboard sessions and spike explorations during product discovery.
4. DevOps & Continuous Improvement: Configure CI/CD pipelines for data code, automate testing, and promote reusable patterns. Keep an eye on performance, cost, and security, driving iterative enhancements.

Key responsibilities
- Engineer reliable, end-to-end data flows using Spark SQL and PySpark notebooks in Databricks (nice to have).
- Orchestrate and schedule batch and streaming jobs in Azure Data Factory or similar tools.
- Develop and maintain dbt models (nice to have) to standardise transformations and documentation.
- Implement data-cataloguing, lineage, and governance practices that meet pharma-compliance requirements.
- Drive alignment with analysts to optimise queries, refine schemas, and keep metrics consistent across dashboards, so stakeholders have a single source of truth for decision-making.
- Contribute to agile ceremonies (backlog refinement, sprint reviews, retros) and actively champion a culture of data craftsmanship.

Why Ascendis
- Mission that matters: our pipelines accelerate insight generation that expands patient access to vital therapies.
- Modern stack: Azure, Databricks, dbt, GitHub Actions, with room to propose new tools.
- End-to-end ownership: influence everything from ingestion to visualisation in a small, empowered team.
- Hybrid flexibility & growth: work three days onsite, enjoy generous benefits and great snacks, and pursue clear technical or leadership paths.

Requirements
What you bring:
- Experience: 3–7 years in Data Engineering or Analytics Engineering, ideally in pharma, biotech, or another regulated, data-rich environment.
- Core toolkit: advanced SQL and Python; solid experience in dimensional modelling.
- Nice to have: hands-on experience with Databricks notebooks/Lakehouse and dbt for transformation and testing.
- Product mindset: comfortable iterating fast, demoing early, and measuring impact.
- Communication & teamwork: able to explain trade-offs, write clear documentation, and collaborate closely with analysts, product managers, and business and other stakeholders.
- Quality focus: a passion for clean, maintainable code, automated testing, and robust data governance.
Benefits
- 401(k) plan with company match
- Medical, dental, and vision plans
- Company-offered Life and Accidental Death & Dismemberment (AD&D) insurance
- Company-provided short- and long-term disability benefits
- Unique offerings of Pet Insurance and Legal Insurance
- Employee Assistance Program
- Employee discounts
- Professional development
- Health Savings Account (HSA)
- Flexible Spending Accounts
- Various incentive compensation plans
- Accident, Critical Illness, and Hospital Indemnity Insurance
- Mental health resources
- Paid leave benefits for new parents

A note to recruiters: we do not allow external search party solicitation. Presentation of candidates without written permission from the Ascendis Pharma Inc. Human Resources team (specifically from a Talent Acquisition Partner or Human Resources Director) is not allowed. If this occurs, your ownership of these candidates will not be acknowledged.
Princeton, NJ, USA
Negotiable Salary
Data Engineer III
Position: Data Engineer III
Location: Seattle, WA 98121
Duration: 17 months
Job Type: Contract
Work Type: Onsite

Job Description
The Infrastructure Automation team is responsible for delivering the software that powers our infrastructure.

Responsibilities
As a Data Engineer you will be working in one of the world's largest and most complex data warehouse environments, developing and supporting the analytic technologies that give our customers timely, flexible, and structured access to their data.
- Design, build, and maintain scalable, reliable, and reusable data pipelines and infrastructure that support analytics, reporting, and strategic decision-making.
- Design and implement a platform using third-party and in-house reporting tools, modeling metadata, and building reports and dashboards.
- Work with business customers to understand their business requirements and implement solutions that support analytical and reporting needs.
- Explore source systems, data flows, and business processes to uncover opportunities, ensure data accuracy and completeness, and drive improvements in data quality and usability.

Required Skills & Experience
- 7+ years of related experience.
- Experience with data modeling, warehousing, and building ETL pipelines.
- Strong experience with SQL.
- Experience in at least one modern scripting or programming language, such as Python or Java.
- Strong analytical skills, with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, with the ability to collaborate across technical and business teams.
- A good candidate can partner with business owners directly to understand their requirements and provide data that helps them observe patterns and spot anomalies.

Preferred
- Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions.
- Experience with non-relational databases and data stores (object storage, document or key-value stores, graph databases, column-family databases).
- 3+ years of experience preparing data for direct use in visualization tools such as Tableau.

KPIs: meeting requirements, how they action solutions, etc.

Leadership Principles
- Deliver Results
- Dive Deep

Top 3 must-have hard skills
1. Strong experience with SQL
2. Experience in at least one modern scripting or programming language, such as Python or Java
3. Experience with data modeling, warehousing, and building ETL pipelines
Seattle, WA, USA
Negotiable Salary