
Developer Advocate Engineer, Hub/Enterprise - US Remote

Negotiable Salary

Hugging Face

New York, NY, USA


Description

At Hugging Face, we're on a journey to democratize good AI. We are building the fastest-growing platform for AI builders, with over 5 million users and 100k organizations who have collectively shared over 1M models, 300k datasets, and 300k apps. Our open-source libraries have more than 400k stars on GitHub.

About the Role

As a Developer Advocate Engineer, you will be a pivotal force in driving adoption and amplifying awareness of the Hugging Face Hub and our paid offerings (such as the Hugging Face Enterprise Hub and our new Inference Providers feature). Your primary focus will be on creating compelling technical content that showcases the Hub's capabilities and our paid solutions. This position involves working collaboratively across various teams, including Product, Infrastructure, and Open Source.

Day to day, this role will primarily involve:

Technical Content Creation: Authoring documentation, blog posts, tutorials, and articles, and creating engaging videos and presentations for conferences and webinars... All content will aim to educate, inspire, and convert potential users into active adopters of the Hub and our Enterprise solutions, clearly articulating the benefits for different user and enterprise needs.

Engaging Demos and Prototypes: Designing and developing compelling demos and prototypes that showcase the power and versatility of the Hugging Face Hub, with easy-to-use code repositories that allow developers to quickly replicate and build upon your examples. These demos will illustrate pathways to value, for instance by leveraging our Inference Providers for streamlined access to a multitude of models.

Ecosystem Engagement and Awareness Building: Proactively identifying and cultivating relationships within the developer ecosystem. Sharing your content and demos widely, engaging in relevant communities, and participating in events to increase visibility, build awareness, and drive new user acquisition for the Hub and its paid features.

About you

You'll work with a supportive team of engineers and a vast ecosystem while enjoying a lot of autonomy. This position is ideal for someone who is:

An Exceptional Content Creator and Communicator: You have a proven ability to create clear, engaging, and technically accurate content (written, code, video) that resonates with developers. You excel at explaining complex topics simply.

Generalist Engineer with Developer Empathy: Versatile and empathetic, with a strong understanding of developers' needs, pain points, and learning processes.

Enthusiastic Evangelist for ML-Focused Products: Passionate about working on cutting-edge ML products and proactively demonstrating their value to diverse audiences through compelling content and demos.

Developer-First and Growth-Oriented Mindset: Dedicated to empowering developers through excellent resources, with a strong focus on driving adoption and expanding our community through educational outreach.

Adoption, Accessibility, and Awareness Advocate: Deeply committed to creating resources that are not only technically sound but also highly accessible, discoverable, and effective in driving widespread adoption.

Product, Technical, and Content Ownership: Possesses a strong sense of ownership and responsibility for the quality and impact of your content and its role in the product's success in the market.

If you're interested in joining us but don't tick every box above, we still encourage you to apply! We're building a diverse team whose skills, experiences, and backgrounds complement one another.
We're happy to consider where you might be able to make the biggest impact.

More about Hugging Face

We are actively working to build a culture that values diversity, equity, and inclusivity. We are intentionally building a workplace where you feel respected and supported, regardless of who you are or where you come from. We believe this is foundational to building a great company and community, as well as the future of machine learning more broadly. Hugging Face is an equal opportunity employer, and we do not discriminate based on race, ethnicity, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or ability status.

We value development. You will work with some of the smartest people in our industry. We have a bias for impact and continuously challenge ourselves to grow. We provide all employees with reimbursement for relevant conferences, training, and education.

We care about your well-being. We offer flexible working hours and remote options. We offer health, dental, and vision benefits for employees and their dependents, as well as parental leave and flexible paid time off.

We support our employees wherever they are. While we have office spaces in NYC and Paris, we're very distributed, and all remote employees have the opportunity to visit our offices. If needed, we'll also outfit your workstation to ensure you succeed.

We want our teammates to be shareholders. All employees have company equity as part of their compensation package. If we succeed in becoming a category-defining platform in machine learning and artificial intelligence, everyone enjoys the upside.
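The demos described above would typically build on Hugging Face's published client libraries. As a rough, hypothetical illustration (not part of the posting), the sketch below shows how a demo might call a Hub-hosted model through Inference Providers with the huggingface_hub client; the provider name, model ID, and token are placeholders, and a recent huggingface_hub release with Inference Providers support is assumed.

```python
# Hypothetical demo snippet, assuming a recent huggingface_hub release with
# Inference Providers support. Provider, model ID, and token are placeholders.
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="together",   # illustrative provider choice
    api_key="hf_xxx",      # your Hugging Face access token
)

response = client.chat_completion(
    model="meta-llama/Llama-3.1-8B-Instruct",  # illustrative Hub model ID
    messages=[{"role": "user", "content": "What is the Hugging Face Hub?"}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```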

Source: Workable

Location
New York, NY, USA


You may also like

Machine Learning Engineer, ML Runtime & Optimization
Founded in 2016 in Silicon Valley, Pony.ai has quickly become a global leader in autonomous mobility and is a pioneer in extending autonomous mobility technologies and services to a rapidly expanding footprint of sites around the world. Operating Robotaxi, Robotruck, and Personally Owned Vehicles (POV) business units, Pony.ai is an industry leader in the commercialization of autonomous driving and is committed to developing the safest autonomous driving capabilities on a global scale. Pony.ai's leading position has been recognized, with CNBC ranking Pony.ai #10 on its Disruptor list of the 50 most innovative and disruptive tech companies of 2022. In June 2023, Pony.ai was recognized on the XPRIZE and Bessemer Venture Partners inaugural "XB100" 2023 list of the world's top 100 private deep tech companies, ranking #12 globally. As of August 2023, Pony.ai has accumulated nearly 21 million miles of autonomous driving globally. Pony.ai went public on NASDAQ in November 2024.

Responsibilities

The ML Infrastructure team at Pony.ai provides a set of tools to support and automate the lifecycle of the AI workflow, including model development, evaluation, optimization, deployment, and monitoring. As a Machine Learning Engineer in ML Runtime & Optimization, you will develop technologies to accelerate training and inference of the AI models in autonomous driving systems. This includes:

Identifying key applications for current and future autonomous driving problems and performing in-depth analysis and optimization to ensure the best possible performance on current and next-generation compute architectures.
Collaborating closely with diverse groups at Pony.ai, across both hardware and software, to optimize and craft core parallel algorithms and to influence the design of the next-generation compute platform architecture and software infrastructure.
Applying model optimization and efficient deep learning techniques to models and optimized ML operator libraries.
Working across the entire ML framework/compiler stack (e.g., Torch, CUDA, and TensorRT) and on system-efficient deep learning models.

Requirements

BS/MS or PhD in computer science, electrical engineering, or a related discipline.
Strong programming skills in C/C++ or Python.
Experience with model optimization, quantization, or other efficient deep learning techniques.
Good understanding of hardware performance, including CPU/GPU execution models, threads, registers, caches, and cost/performance trade-offs.
Experience with profiling, benchmarking, and validating performance for complex computing architectures.
Experience in optimizing the utilization of compute resources and in identifying and resolving compute and data-flow bottlenecks.
Strong communication skills and the ability to work cross-functionally between software and hardware teams.

Preferred Qualifications (one or more of the following):

Experience with parallel programming, ideally CUDA, OpenCL, or OpenACC.
Experience in computer vision, machine learning, and deep learning.
Strong knowledge of software design, programming techniques, and algorithms.
Good knowledge of common deep learning frameworks and libraries.
Deep knowledge of system performance, GPU optimization, or ML compilers.

Compensation and Benefits

Base Salary Range: $140,000 - $250,000 annually. Compensation may vary outside of this range depending on many factors, including the candidate's qualifications, skills, competencies, experience, and location. Base pay is one part of total compensation, and this role may be eligible for bonuses/incentives and restricted stock units. We also provide the following benefits to eligible employees: Health Care Plan (Medical, Dental & Vision), Retirement Plan (Traditional and Roth 401k), Life Insurance (Basic, Voluntary & AD&D), Paid Time Off (Vacation & Public Holidays), Family Leave (Maternity, Paternity), Short-Term & Long-Term Disability, and Free Food & Snacks.
Fremont, CA, USA
$140,000-250,000/year
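The model optimization and quantization work this listing describes can be illustrated with a small, generic example. The sketch below is not Pony.ai code; it shows post-training dynamic quantization in PyTorch on a made-up toy model, one common technique for shrinking models and speeding up CPU inference.

```python
# A minimal sketch, not Pony.ai code: post-training dynamic quantization in
# PyTorch. The toy model and layer sizes are invented for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).eval()

# Replace Linear layers with int8 dynamically quantized versions to cut
# memory footprint and speed up CPU inference.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.inference_mode():
    print(quantized(x).shape)  # torch.Size([1, 10])
```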
C++ Market Data Engineer
Trexquant is a growing systematic fund at the forefront of quantitative finance, with a core team of highly accomplished researchers and engineers. To keep pace with our expanding global trading operations, we are seeking a C++ Market Data Engineer to design and build ultra-low-latency feed handlers for premier vendor feeds and major exchange multicast feeds. This is a high-impact role that sits at the heart of Trexquant's trading platform; the quality, speed, and reliability of your code directly influence every strategy we run.

Responsibilities

Design and implement high-performance feed handlers in modern C++ for equities, futures, and options across global venues (e.g., NYSE, CME, Refinitiv RTS, Bloomberg B-PIPE).
Optimize for micro- and nanosecond latency using lock-free data structures, cache-friendly memory layouts, and kernel-bypass networking where appropriate.
Build reusable libraries for message decoding, normalization, and publication to internal buses shared by research, simulation, and live trading systems.
Collaborate with cross-functional teams to tune TCP/UDP multicast stacks, kernel parameters, and NIC settings for deterministic performance.
Provide robust failover, gap-recovery, and replay mechanisms to guarantee data integrity under packet loss or venue outages.
Instrument code paths with precision timestamping and performance metrics; drive continuous latency regression testing and capacity planning.
Partner closely with quantitative researchers to understand downstream data requirements and to fine-tune delivery formats for both simulation and live trading.
Produce clear architecture documents, operational run-books, and post-mortems; participate in a 24×7 follow-the-sun support rotation for mission-critical market-data services.

Requirements

BS/MS/PhD in Computer Science, Electrical Engineering, or a related field.
3+ years of professional C++ (14/17/20) development experience focused on low-latency, high-throughput systems.
Proven track record building or maintaining real-time market-data feeds (e.g., Refinitiv RTS/TREP, Bloomberg B-PIPE, OPRA, CME MDP, ITCH).
Strong grasp of concurrency, lock-free algorithms, memory-model semantics, and compiler optimizations.
Familiarity with serialization formats (FAST, SBE, Protocol Buffers) and time-series databases or in-memory caches.
Comfort with scripting in Python for prototyping, testing, and ops automation.
Excellent problem-solving skills, an ownership mindset, and the ability to thrive in a fast-paced trading environment.
Familiarity with containerization (Docker/K8s) and public-cloud networking (AWS, GCP).

Benefits

Competitive salary, plus a bonus based on individual and company performance.
Collaborative, casual, and friendly work environment while solving the hardest problems in the financial markets.
PPO health, dental, and vision insurance premiums fully covered for you and your dependents.
Pre-tax commuter benefits.

Trexquant is an Equal Opportunity Employer.
Stamford, CT, USA
Negotiable Salary
Low-Latency Developer
Atto Trading, a dynamic quantitative trading firm founded in 2010 and a leader in global high-frequency strategies, is looking for a Low-Latency Developer to join our team. We are expanding an international, diverse team with experts in trading, statistics, engineering, and technology. Our disciplined approach combined with rapid market feedback allows us to quickly turn ideas into profit. Our environment of learning and collaboration allows us to solve some of the world's hardest problems, together. As a small firm, we remain nimble and hold ourselves to the highest standards of integrity, ingenuity, and effort.

Position Highlights

We are modernizing our trading and research platform to scale our alpha trading business. This platform will enable researchers to explore, test, and deploy sophisticated signals, models, and strategies across asset classes in a robust, fully automated manner while maintaining competitive latency targets. As a Low-Latency Developer, you will be responsible for designing, optimizing, and maintaining high-performance trading systems to minimize latency.

Your Mission and Goals

Analyze and optimize the performance of low-latency trading systems by identifying bottlenecks and inefficiencies in the code and implementing effective solutions.
Develop and adapt the platform to support the demands of a fast-paced trading environment while effectively managing technical debt.

Requirements

Over 5 years of experience as a low-latency developer with a focus on performance optimization in a high-frequency trading (HFT) environment.
Experience with multiple components of an HFT platform or system, particularly those on the critical path.
Experience working at an HFT firm during its startup phase and/or on a trading team is a significant plus.

Technical Skills

Deep knowledge of HFT platforms: networking, kernel bypass, market data, order entry, threading, inter-process communication, and strategy APIs.
Proven low-latency development and performance optimization in HFT.
Strong proficiency in C++.
Excellent understanding of CPU caches and cache efficiency.
Experience with multithreaded and multi-process synchronization.
Good understanding of networking protocols.
Skilled in performance profiling and optimization tools.
Advanced knowledge of Linux operating systems, including kernel-level device mechanisms.

About You

Practical decision-making skills.
Excellent communication skills.
Strong analytical and problem-solving skills.
Passion for trading.
Ability to work independently and as part of a team.

Benefits

Competitive rates of pay.
Paid time off (5 weeks).
Coverage of health insurance costs.
Office lunches.
Discretionary bonus system.
Annual base salary range of $150,000 to $300,000. Pay (base and bonus) may vary depending on job-related skills and experience.

Our Motivation

We are a company committed to staying at the forefront of technology. Our team is passionate about continual learning and improvement. With no external investors or customers, we are the primary users of the products we create, giving you the opportunity to make a real impact on our company's growth.

Ready to advance your career? Join our innovative team and help shape the future of trading on a global scale. Apply now and let's create the future together!
New York, NY, USA
$150,000-300,000/year