
Senior Big Data Engineer

Negotiable Salary

Qode

North Carolina, USA


Description

Role Description: Senior Big Data Engineer – Scala/Spark
Location: [Charlotte NC/NY/NJ/or Onsite/Hybrid]
Type: [Full-time]
Experience Level: Senior level only

Role Overview
We are looking for a highly skilled Senior Big Data Engineer with strong expertise in Scala and Apache Spark to support our data engineering team. The ideal candidate should be able to read and refactor existing Scala Spark code, develop enhancements, create robust unit tests, and help migrate code into a modern cloud-native environment.

Key Responsibilities:
- Analyze and refactor existing Scala Spark code to support migration to new environments.
- Write and enhance Scala Spark code with a focus on performance and maintainability.
- Develop comprehensive unit tests to ensure code quality and stability.
- Work with data stored in Amazon S3 (CSV and Parquet formats).
- Integrate and manipulate data from MongoDB.
- Independently set up environments to validate and test code changes.
- Execute both manual and automated testing procedures.
- Collaborate within an Agile team and contribute to sprint planning, story grooming, and reviews.
- Maintain clear and proactive communication with team members, project managers, and stakeholders.

Technical Skills & Qualifications:
- Proficiency in Scala and Apache Spark for large-scale data processing.
- Working knowledge of Python for scripting and data transformation.
- Hands-on experience with Amazon Web Services (AWS), including S3, AWS Glue, and Step Functions.
- Experience working with MongoDB and structured/unstructured data.
- Familiarity with SQL and BDD frameworks.
- Experience with Terraform for infrastructure-as-code is a plus.
- Strong understanding of the software development lifecycle (SDLC), from development to production deployment.
- Experience using tools like Git, Azure DevOps/TFS, and JIRA.
- Comfortable working in Agile/Scrum environments.

Core Technologies: Scala, Spark, AWS Glue, AWS Step Functions, Maven, Terraform

Soft Skills:
- Strong problem-solving abilities and attention to detail.
- Ability to work independently and as part of a collaborative Agile team.
- Excellent written and verbal communication skills.
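For context, here is a minimal sketch of the kind of day-to-day work the posting describes: reading CSV/Parquet data from S3 with Scala Spark, applying a small transformation, and covering it with a unit test. The bucket paths, column names, and the normalizeAmounts helper are hypothetical examples chosen for illustration, not details from the posting.

import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.col

object OrdersJob {
  // Hypothetical transformation: keep positive amounts and standardise the column name.
  def normalizeAmounts(df: DataFrame): DataFrame =
    df.filter(col("amount") > 0)
      .withColumnRenamed("amount", "amount_usd")

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("orders-migration-example").getOrCreate()

    // Read Parquet and CSV inputs from S3 (paths are placeholders).
    val orders  = spark.read.parquet("s3a://example-bucket/orders/")
    val refunds = spark.read.option("header", "true").csv("s3a://example-bucket/refunds.csv")
    println(s"Refund rows: ${refunds.count()}")

    normalizeAmounts(orders).write.mode("overwrite").parquet("s3a://example-bucket/orders_clean/")
    spark.stop()
  }
}

// A matching ScalaTest unit test, run against a local SparkSession.
class OrdersJobSpec extends org.scalatest.funsuite.AnyFunSuite {
  private val spark = SparkSession.builder().master("local[2]").appName("test").getOrCreate()
  import spark.implicits._

  test("normalizeAmounts drops non-positive amounts and renames the column") {
    val input  = Seq(("a", 10.0), ("b", -5.0)).toDF("id", "amount")
    val result = OrdersJob.normalizeAmounts(input)
    assert(result.columns.contains("amount_usd"))
    assert(result.count() == 1)
  }
}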

Source: Workable

Location
North Carolina, USA


You may also like

Pickle
Senior Data Engineer
New York, NY, USA
Must be located in NYC (we are in office Monday - Thursday). Please apply by emailing recruiting@shoponpickle.com with “Senior Data Engineer” in the subject line and include the following:
- Share 1-3 (max) bullets on why you think you’re a standout applicant for this role.
- Share 1-2 (max) bullets summarizing an initiative you’re most proud of and the impact it drove (we love metrics!)
- Please attach your resume.

Pickle is a rental marketplace that aims to monetize the billions of underutilized assets sitting in consumers’ closets and brands’ inventory. Users can easily tap into shared closets within their community through flexible and/or on-demand delivery options. Our goal is to provide affordable and convenient access to quality items exactly when our users need them. We are starting with P2P clothing/accessories and expanding to other categories.

Role Overview: As our first dedicated Senior Data Engineer, you will architect and scale the data hub that fuels every insight, experiment, and decision across the company. You’ll own the full lifecycle of our data platform, from ingestion and modeling to governance and observability, while partnering closely with Product, Growth, and Engineering to unlock self-serve analytics and advanced use cases.

Requirements
- 6+ years of professional data-engineering experience, designing and running production-grade ELT/ETL pipelines
- Deep expertise with Snowflake or a similar tool (warehouse design, performance tuning, security, and cost management)
- Strong SQL plus proficiency in Python (or TypeScript/Node.js) for data transformation and orchestration
- Hands-on experience with modern data-stack tooling (e.g., dbt, Airflow/Prefect, Fivetran/Hevo, Kafka/Kinesis)
- Solid grasp of data-governance concepts (catalogs, lineage, quality testing, RBAC, PII handling)
- Cloud infrastructure skills, AWS preferred (S3, ECS/Lambda, IAM, Secrets Manager, CloudWatch)
- Proven track record of setting up data infrastructure and analytics processes from the ground up in a high-growth or startup environment
- Excellent communication skills and the ability to translate business questions into scalable datasets and metrics
- Bonus: experience supporting experimentation frameworks, real-time/event-driven pipelines, or ML feature stores

Key Responsibilities
- Own the data platform roadmap: design, build, and scale the pipelines and orchestration that feed Pickle’s central data hub
- Stand up analytics infrastructure: establish best-practice processes for data modeling, testing, CI/CD, and observability to ensure reliable, timely data
- Implement and optimize Snowflake (or a similar tool): tune performance, manage costs, and leverage advanced features (e.g., zero-copy clones, data sharing)
- Champion data governance & security: implement standards for documentation, data quality, lineage, and access control
- Enable self-service analytics: partner with analysts and stakeholders to expose trusted datasets and metrics in BI tools and experimentation platforms
- Collaborate across functions: work with Product, Growth, Operations, and Engineering to translate business requirements into data assets that drive conversion, retention, and liquidity
- Mentor and elevate the team: share best practices, review code, and influence architecture and tooling decisions as we scale
- Measure and monitor: define SLAs/SLOs for data freshness and accuracy; set up dashboards and alerts to maintain platform health

Benefits
- Competitive compensation and equity
- Healthcare (Medical, Dental, Vision)
- Take-what-you-need paid time off
- Meal Pal credits to cover the cost of lunch
- Stipend to help set up your desk and office environment
- Work directly with the founders and executive team
- Professional coaching, training, and development
- Grow with the company
- Pickle credits for our employees; we love when the team uses Pickle!
- Fun team events and company parties
- Company offsites
- Office space in NYC
Negotiable Salary
Programmer (Hillsboro)
2480 NE Century Blvd, Hillsboro, OR 97124, USA
American Precision Industries is seeking an experienced full-time CNC Mill Programmer for IMMEDIATE hire.

HOURS: Day shift: M-Th, 10-hour shift, 6:00 am to 4:30 pm
Salary: DOE, plus a monthly bonus program paid when the company-wide monthly sales goal is reached.

APPLY HERE: https://americanprecisionindustries.com/employment-application/ or apply in person at: American Precision Industries, 2480 NE Century Blvd., Hillsboro, OR 97124

PRIMARY PURPOSE: Create programs using MasterCam to manufacture parts per blueprint specifications.

REQUIREMENTS:
* 5 yrs. experience with MasterCam programming software (2020 or above)
* Experience with HEM (High Efficiency Machining) helpful
* Experience with SolidCam helpful
* Experience as a CNC mill machinist
* Experience programming ferrous, non-ferrous metals and plastics
* Skills to plan and program complicated parts according to blueprint
* Strong knowledge of tolerances, blueprint reading and GD&T
* Ability to work independently with little supervision
* Excellent communication skills and willingness to work with others
* Must have excellent work ethic and attendance
* Attention to detail

JOB DUTIES:
* Program parts to be run on various Mori Seiki, DMG Mori and Doosan CNC Vertical, Horizontal, 5-Axis mills and Cells according to customer specifications.
* Daily programming using MasterCam 2025
* Review blueprints and establish a sequence of operations to produce the product
* Update programs with needed changes
* Establish correct tools to use on each machine to manufacture the product
* Design fixtures for CNC Mill runs
* Produce set-up sheets for operators
* Determine size and availability of materials
* Request materials for production through Purchasing
* Daily communication with the CAD Dept, operators and management to develop the product as necessary
* Review new jobs and determine priority
* Review returned product and update documentation as necessary
* Other duties as assigned

We offer a full benefit package to employees working full-time at 30+ hrs. per week. This includes 100% employer-paid medical, dental and life insurance. Dependent coverage, short- and long-term disability, AFLAC, and vision are voluntary at employee cost. 401k, paid holidays and paid vacation round out our benefit package.

We are an Equal Opportunity Employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Negotiable Salary
TWG Global
Senior Machine Learning Engineer - AI Solutions
Santa Monica, CA, USA
At TWG Group Holdings, LLC (‘TWG Global’), we are at the cutting edge of innovation and business transformation, leveraging data and AI to manage a diverse portfolio that spans Investments, Securities, Insurance, Finance, Corporate Lending, Merchant Banking, as well as Sports, Media, and Entertainment. Data is one of our most valuable assets, and our commitment to AI allows us to transform that data into a powerful competitive advantage. We are pioneering a range of data and AI services that provide real-time intelligence to both our customers and workforce, enabling them to make informed decisions within interactive, cloud-native business applications.

As we evolve into an AI-first, model-driven, and cloud-native company, we prioritize responsible data and AI practices that protect our customers and workforce while ensuring compliance within a regulated environment. Our goal is to lead by example, acting with the highest standards of ethics, responsibility, and transparency. This commitment drives our development of cutting-edge AI/ML business applications across our diverse product and service offerings.

In this role, you will be at the heart of our data and analytics transformation, collaborating closely with management teams to achieve our strategic objectives and operational excellence. Our decentralized structure empowers each business unit to operate autonomously while benefiting from the strategic guidance and support of our central AI Solutions Group.

Through our pioneering partnerships with major data and AI vendors, you will have the opportunity to work on transformative projects that will revolutionize marketing, distribution, customer acquisition, and internal processes. These initiatives are designed to enhance productivity, reduce costs, and create high-demand products and services. The advanced data systems we develop will grant management unparalleled access to real-time performance data, enabling swift, informed decision-making and agile business adjustments.

Moreover, you will play a crucial role in leveraging our strategic relationships and equity positions in leading tech startups. These collaborations are not just about staying ahead of the competition; they are about creating true competitive advantages across our businesses. You will contribute to projects that are reshaping industries, from harnessing cloud-based supercomputing capabilities to driving key initiatives with major universities that are shaping the future of technology.

At TWG Global, your work will directly contribute to our ambitious goal of generating sustained growth and superior returns, with expected annual equity returns exceeding 20%, including growing dividends. Join us as we push the boundaries of technology and innovation, delivering significant value across our portfolio while making a profound impact on the industries we serve.

As a Senior Machine Learning Engineer, you’ll work alongside seasoned ML engineers and data scientists to design, build, and deploy machine learning systems that drive real business value. Reporting to the Executive Director of AI Science, you’ll gain hands-on experience contributing to production-ready AI applications, including predictive modeling, optimization, and monitoring tools used across the enterprise.

This is a high-growth opportunity for someone with early industry experience (or strong academic grounding) in machine learning, eager to deepen their technical expertise and grow within a dynamic AI team building at the frontier of applied ML.
Responsibilities:
- Contribute to the development and deployment of ML models and pipelines across business-critical domains such as financial services and insurance.
- Support productionization efforts, including model packaging, integration, deployment, and monitoring for model performance, drift, and reliability.
- Conduct exploratory data analysis and feature engineering to support experimentation, prototyping, and model improvements.
- Collaborate with senior engineers and data scientists to refine models and improve scalability, robustness, and accuracy.
- Participate in building internal tools and infrastructure that enhance model training, testing, and monitoring workflows.
- Clean, transform, and prepare datasets from diverse sources, ensuring data quality and consistency across ML pipelines.
- Translate findings into clear, actionable insights, and communicate results effectively to technical and non-technical stakeholders.

Requirements
- 3+ years of experience building and deploying ML models in production environments, including hands-on experience with model monitoring and diagnostics.
- Solid understanding of machine learning fundamentals, statistical methods, and data science workflows.
- Experience with multimodal, generative AI, or large language models (e.g., LLMs, diffusion models) is a strong plus.
- Exposure to model lifecycle management tools (e.g., MLflow, Weights & Biases) and production monitoring systems (e.g., Prometheus, Grafana).
- Proficiency in Python and familiarity with core ML libraries (e.g., scikit-learn, XGBoost, TensorFlow, PyTorch).
- Experience with data manipulation and analysis using tools like pandas, NumPy, and SQL.
- Familiarity with data visualization tools (e.g., matplotlib, seaborn, Plotly) to support storytelling and analysis.
- Excellent problem-solving skills, eagerness to learn, and comfort operating in fast-paced, evolving environments.
- Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or a related quantitative field.
- Strong written and verbal communication skills, with the ability to clearly explain technical concepts and results to diverse audiences.

Preferred Experience:
- Hands-on experience with Palantir platforms (e.g., Foundry, AIP, Ontology), including developing, deploying, and integrating machine learning solutions within Palantir’s data and AI ecosystem.

Benefits

Position Location
This position is based out of our Santa Monica, CA office. A different working location will be considered on a case-by-case basis.

Compensation
The base pay for this position is $190,000. A bonus will be provided as part of the compensation package, in addition to the full range of medical, financial, and/or other benefits.

TWG is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
$190,000