
Data Engineer III

Negotiable Salary

iSoftTek Solutions Inc

Seattle, WA, USA


Description

Position: Data Engineer III
Location: Seattle, WA 98121
Duration: 17 Months
Job Type: Contract
Work Type: Onsite

Job Description:
The Infrastructure Automation team is responsible for delivering the software that powers our infrastructure.

Responsibilities
- As a Data Engineer, you will work in one of the world's largest and most complex data warehouse environments, developing and supporting the analytic technologies that give our customers timely, flexible, and structured access to their data.
- Design, build, and maintain scalable, reliable, and reusable data pipelines and infrastructure that support analytics, reporting, and strategic decision-making.
- Design and implement a platform using third-party and in-house reporting tools, modeling metadata, and building reports and dashboards.
- Work with business customers to understand business requirements and implement solutions that support analytical and reporting needs.
- Explore source systems, data flows, and business processes to uncover opportunities, ensure data accuracy and completeness, and drive improvements in data quality and usability.

Required Skills & Experience
- 7+ years of related experience.
- Experience with data modeling, warehousing, and building ETL pipelines.
- Strong experience with SQL.
- Experience in at least one modern scripting or programming language, such as Python or Java.
- Strong analytical skills, with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, with the ability to collaborate across technical and business teams.
- A good candidate can partner directly with business owners to understand their requirements and provide data that helps them observe patterns and spot anomalies.

Preferred
- Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions.
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases).
- 3+ years of experience preparing data for direct use in visualization tools such as Tableau.

KPIs: meeting requirements, how solutions are actioned, etc.

Leadership Principles:
- Deliver Results
- Dive Deep

Top 3 must-have hard skills
1. Strong experience with SQL
2. Experience in at least one modern scripting or programming language, such as Python or Java
3. Experience with data modeling, warehousing, and building ETL pipelines

Source: workable


You may also like

DMV IT Service
Globality BID - .NET Python Full Stack Developer, WLK TX
Roanoke, TX, USA
Job Title: .NET Python Full Stack Developer
Location: Roanoke, TX
Employment Type: Contract

About Us:
DMV IT Service LLC is a trusted IT consulting firm, established in 2020. We specialize in optimizing IT infrastructure, providing expert guidance, and supporting workforce needs with top-tier staffing services. Our expertise spans system administration, cybersecurity, networking, and IT operations. We empower our clients to achieve their technology goals with a client-focused approach that includes online training and job placements, fostering long-term IT success.

Job Purpose:
The developer will focus on building robust RESTful APIs and backend systems using .NET, Python, and Java technologies. This includes writing clean and maintainable code, participating in architecture discussions, and ensuring seamless integration and performance of distributed systems. The role also involves contributing to cloud deployments and supporting continuous integration and delivery pipelines.

Key Responsibilities
- Design, build, and maintain RESTful APIs using Java, integrated into a microservices environment
- Develop scalable backend systems using .NET and Python
- Collaborate with product owners, QA, and other developers to gather requirements and translate them into technical solutions
- Build and maintain web applications using Angular, JavaScript/TypeScript, Node.js, HTML, and CSS
- Work within Agile (Scrum) teams to deliver high-quality features and enhancements
- Utilize Azure and AWS cloud platforms for hosting and deploying services
- Participate in DevOps practices, including CI/CD pipeline maintenance and automation
- Perform unit testing and integration testing to ensure code reliability
- Write clean, reusable, and well-documented code in line with industry best practices
- Maintain version control using tools like Git
- Create and maintain technical documentation for developed services

Required Skills & Experience
- 5+ years of experience in application development and data engineering
- Proficient in backend development using .NET and Python
- Strong experience developing APIs using Java (preferably with Spring Boot)
- Hands-on experience with frontend technologies such as Angular, JavaScript/TypeScript, Node, HTML, and CSS
- Solid understanding of REST API architecture and secure design principles
- Experience with Azure and/or AWS cloud services
- Proficiency in SQL and working with relational databases like PostgreSQL or MySQL
- Strong skills in version control tools such as Git
- Experience with DevOps, CI/CD pipelines, and automated deployment practices
- Familiarity with Agile methodologies (Scrum)
- Background in financial services is a plus
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field
Negotiable Salary
Tiger Analytics
Power BI Developer
New Jersey, USA
Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Science, Machine Learning, and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership have been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best global analytics consulting team in the world.

The right candidate will have broad skills in database design, be comfortable dealing with large and complex data sets, have experience building self-service dashboards, be comfortable using visualization tools, and be able to apply these skills to generate insights that help solve business challenges. We are looking for someone who can bring their vision to the table and implement positive change in taking the company's data analytics to the next level.

Requirements
- Bachelor's degree in Computer Science or a related field
- 6+ years of experience in the Business Intelligence space
- Strong experience in Power BI development
- Knowledge of SQL is a must
- Knowledge of Snowflake is a must
- SAP experience is a plus
- Databricks or Azure Synapse experience preferred
- Supply chain experience preferred
- Experience with Data Warehouse ETL processes using SSIS packages preferred
- Experience with tools and concepts related to data and analytics, such as dimensional modeling, ETL, reporting tools, data governance, data warehousing, and structured and unstructured data
- Coordinate with different stakeholders to gather requirements and propose KPIs based on those requirements
- Consulting experience is highly preferred

Benefits
This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.
Negotiable Salary
Qode
Senior Big Data Engineer
New York, NY, USA
Role Description: Senior Big Data Engineer – Scala/Spark
Location: [CLT/NY/NJ/or Remote/Hybrid]
Type: [Full-time/Contract]
Experience Level: Senior level only

Role Overview
We are looking for a highly skilled Senior Big Data Engineer with strong expertise in Scala and Apache Spark to support our data engineering team. The ideal candidate should be able to read and refactor existing Scala Spark code, develop enhancements, create robust unit tests, and help migrate code into a modern cloud-native environment.

Key Responsibilities:
- Analyze and refactor existing Scala Spark code to support migration to new environments.
- Write and enhance Scala Spark code with a focus on performance and maintainability.
- Develop comprehensive unit tests to ensure code quality and stability.
- Work with data stored in Amazon S3 (CSV and Parquet formats).
- Integrate and manipulate data from MongoDB.
- Independently set up environments to validate and test code changes.
- Execute both manual and automated testing procedures.
- Collaborate within an Agile team and contribute to sprint planning, story grooming, and reviews.
- Maintain clear and proactive communication with team members, project managers, and stakeholders.

Technical Skills & Qualifications:
- Proficiency in Scala and Apache Spark for large-scale data processing.
- Working knowledge of Python for scripting and data transformation.
- Hands-on experience with Amazon Web Services (AWS), including S3, AWS Glue, and Step Functions.
- Experience working with MongoDB and structured/unstructured data.
- Familiarity with SQL and BDD frameworks.
- Experience with Terraform for infrastructure-as-code is a plus.
- Strong understanding of the software development lifecycle (SDLC), from development to production deployment.
- Experience using tools like Git, Azure DevOps/TFS, and JIRA.
- Comfortable working in Agile/Scrum environments.

Core Technologies: Scala, Spark, AWS Glue, AWS Step Functions, Maven, Terraform

Soft Skills:
- Strong problem-solving abilities and attention to detail.
- Ability to work independently and as part of a collaborative Agile team.
- Excellent written and verbal communication skills.
Negotiable Salary
© 2025 Servanan International Pte. Ltd.