
Sr GCP Engineer

Negotiable Salary

Tek Spikes

Springfield, MO, USA


Description

Infrastructure Automation & Management:
- Design, implement, and maintain scalable, reliable, and secure cloud infrastructure using GCP services.
- Automate cloud infrastructure provisioning, scaling, and monitoring using Infrastructure as Code (IaC) tools such as Terraform or Google Cloud Deployment Manager.
- Manage and optimize GCP resources such as Compute Engine, Kubernetes Engine, Cloud Functions, and BigQuery to support development teams.

CI/CD Pipeline Management:
- Build, maintain, and enhance continuous integration and continuous deployment (CI/CD) pipelines to ensure seamless, automated code deployment to GCP environments.
- Integrate CI/CD pipelines with GCP services such as Cloud Build and Cloud Source Repositories, or with third-party tools such as Jenkins.
- Ensure pipelines are optimized for faster build, test, and deployment cycles.

Monitoring & Incident Management:
- Implement and manage cloud monitoring and logging solutions using Dynatrace and GCP-native tools such as Stackdriver (Monitoring, Logging, and Trace).
- Monitor cloud infrastructure health and resolve performance issues, ensuring minimal downtime and maximum uptime.
- Set up incident management workflows, implement alerting mechanisms, and create runbooks for rapid issue resolution.

Security & Compliance:
- Implement security best practices for cloud infrastructure, including identity and access management (IAM), encryption, and network security.
- Ensure GCP environments comply with organizational security policies and industry standards such as GDPR, CCPA, or PCI-DSS.
- Conduct vulnerability assessments and perform regular patching and system updates to mitigate security risks.

Collaboration & Support:
- Collaborate with development teams to design cloud-native applications that are optimized for performance, security, and scalability on GCP.
- Work closely with cloud architects to provide input on cloud design and best practices for continuous integration, testing, and deployment.
- Provide day-to-day support for development, QA, and production environments, ensuring availability and stability.

Cost Optimization:
- Monitor and optimize cloud costs by analyzing resource utilization and recommending cost-saving measures such as right-sizing instances, using preemptible VMs, or implementing auto-scaling (an illustrative sketch of such an analysis appears after the requirements below).

Tooling & Scripting:
- Develop and maintain scripts (in languages such as Python, Bash, or PowerShell) to automate routine tasks and system operations (an illustrative sketch appears after the requirements below).
- Use configuration management tools such as Ansible, Chef, or Puppet to manage cloud resources and maintain system configurations.

Required Qualifications & Skills:

Experience:
- 3+ years of experience as a DevOps Engineer or Cloud Engineer, with hands-on experience managing cloud infrastructure.
- Proven experience with Google Cloud Platform (GCP) services such as Compute Engine, Cloud Storage, Kubernetes Engine, Pub/Sub, Cloud SQL, and others.
- Experience automating cloud infrastructure with Infrastructure as Code (IaC) tools such as Terraform, Cloud Deployment Manager, or Ansible.

Technical Skills:
- Strong knowledge of CI/CD tools and processes (e.g., Jenkins, GitLab CI, CircleCI, or GCP Cloud Build).
- Proficiency in scripting and automation using Python, Bash, or similar languages.
- Strong understanding of containerization technologies (Docker) and container orchestration tools such as Kubernetes.
- Familiarity with GCP networking, security (IAM, VPC, firewall rules), and monitoring tools (Stackdriver).

Cloud & DevOps Tools:
- Experience with Git for version control and collaboration.
- Familiarity with GCP-native DevOps tools such as Cloud Build, Cloud Source Repositories, Artifact Registry, and Binary Authorization.
- Understanding of DevOps practices and principles, including continuous integration, continuous delivery, Infrastructure as Code, and monitoring/alerting.

Security & Compliance:
- Knowledge of security best practices for cloud environments, including IAM, network security, and data encryption.
- Understanding of compliance and regulatory requirements related to cloud computing (e.g., GDPR, CCPA, HIPAA, or PCI).

Soft Skills:
- Strong problem-solving skills with the ability to work in a fast-paced environment.
- Excellent communication skills, with the ability to explain technical concepts to both technical and non-technical stakeholders.
- Team-oriented mindset with the ability to work collaboratively with cross-functional teams.

Certifications (Preferred):
- Google Professional Cloud DevOps Engineer certification (preferred).
- Other GCP certifications such as Google Professional Cloud Architect or Associate Cloud Engineer are a plus.
- DevOps certifications such as Certified Kubernetes Administrator (CKA) or an AWS/GCP DevOps certification are advantageous.
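To make the Tooling & Scripting expectations concrete, the following is a minimal, illustrative Python sketch of the kind of routine automation the posting describes: it shells out to the standard gcloud CLI to list Compute Engine instances and reports those that are not running. The project ID and the decision to simply print a report are hypothetical choices made for this example; they are not part of the posting.

```python
#!/usr/bin/env python3
"""Illustrative sketch: report Compute Engine instances that are not RUNNING.

Assumes the gcloud CLI is installed and authenticated. The project ID
below is a placeholder, not an environment described in the posting.
"""
import json
import subprocess
import sys

PROJECT_ID = "example-project"  # hypothetical placeholder


def list_instances(project: str) -> list[dict]:
    """Return Compute Engine instances for a project as parsed JSON."""
    result = subprocess.run(
        ["gcloud", "compute", "instances", "list",
         f"--project={project}", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)


def main() -> int:
    instances = list_instances(PROJECT_ID)
    not_running = [i for i in instances if i.get("status") != "RUNNING"]
    for inst in not_running:
        # The zone field is returned as a URL; keep only the final segment.
        zone = inst.get("zone", "").rsplit("/", 1)[-1]
        print(f"{inst.get('name')}\t{zone}\t{inst.get('status')}")
    print(f"{len(not_running)} of {len(instances)} instances are not RUNNING")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

In practice, a script like this would typically run on a schedule (for example, via Cloud Scheduler) and send alerts rather than print, but the flow shown is the core pattern of the routine automation the role calls for.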
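Similarly, for the Cost Optimization responsibility, the sketch below shows one simple way such an analysis might start: given an exported CSV of average CPU utilization per instance, it flags instances below a utilization threshold as right-sizing or preemptible-VM candidates. The file name, column names, and threshold are invented for the example; a production version would pull metrics from Cloud Monitoring rather than a CSV.

```python
#!/usr/bin/env python3
"""Illustrative sketch: flag low-utilization instances as right-sizing candidates.

The input file and its columns (instance, machine_type, avg_cpu_percent)
are hypothetical; a real pipeline would read Cloud Monitoring metrics.
"""
import csv
import sys

CPU_THRESHOLD_PERCENT = 20.0  # example threshold, tuned per workload


def find_candidates(path: str) -> list[dict]:
    """Return rows whose average CPU utilization is below the threshold."""
    with open(path, newline="") as handle:
        rows = list(csv.DictReader(handle))
    return [r for r in rows if float(r["avg_cpu_percent"]) < CPU_THRESHOLD_PERCENT]


def main() -> int:
    path = sys.argv[1] if len(sys.argv) > 1 else "utilization_export.csv"
    for row in find_candidates(path):
        print(f"{row['instance']} ({row['machine_type']}): "
              f"{row['avg_cpu_percent']}% avg CPU -> consider right-sizing")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```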

Source: Workable

Location
Springfield, MO, USA


You may also like

Craigslist
Remote Software Development Career 🧑‍💻
We are looking for ambitious individuals who want to begin their careers in technology. If you are ready to learn coding, work on hands-on projects, and prepare for entry-level jobs, this program is built for you. This program is remote and flexible, with nearly 900 hours of guided training. Study part-time or full-time while learning industry-standard programming languages, tools, and workflows, and create a résumé and portfolio employers notice.

🖥️ Technology & Programming Fundamentals
- Learn the basics of computers, networks, browsers, and the internet
- Study algorithms, security concepts, number systems, and data structures
- Practice Python scripting, command line use, and flowchart design

💻 Web & Front-End Development
- Build websites using HTML5, CSS3, and Bootstrap
- Create dynamic apps with JavaScript, jQuery, and React.js
- Practice responsive design and clean layouts

🗄️ Back-End & Database Development
- Design databases with SQL and SQL Server
- Perform CRUD operations and relational queries
- Build server-side systems with Python (Django) and C# (.NET Core)

🧑‍💻 Programming Languages & Tools
- Master seven languages, including C#, Python, JavaScript, HTML, CSS, and SQL
- Use Git, GitHub, Visual Studio, and Team Foundation Server
- Practice industry-level version control and collaboration

🧪 Capstone Projects
- Complete two professional projects (Python + C#)
- Build portfolio-ready work using Agile, Scrum, and DevOps
- Practice debugging, teamwork, and coding workflows

🧰 Career Preparation
- Learn résumé writing and cover letter tips
- Practice whiteboard and technical interviews
- Prepare for junior developer opportunities

🚀 No experience required. Remote applicants welcome. Take the first step into your future in technology.
👉 Apply now: https://softwaredevpros.online/
345C+CV Mesa Oeste, Albuquerque, NM, USA
$30/hour
Workable
R&D AI Software Engineer / End-to-End Machine Learning Engineer / RAG and LLM
About Pathway
Pathway is building LiveAI™ systems that think and learn in real time as humans do. Our mission is to deeply understand how and why LLMs work, fundamentally changing the way models think. The team is made up of AI luminaries. Pathway's CTO, Jan Chorowski, co-authored papers with Geoff Hinton and Yoshua Bengio and was one of the first people to apply attention to speech. Our CSO, Adrian Kosowski, received his PhD in Theoretical Computer Science at the age of 20 and made significant contributions across numerous scientific fields, including AI and quantum information. He also served as a professor and a coach for competitive programmers at Ecole Polytechnique. The team also includes many of the world's top scientists and competitive programmers, alongside seasoned Silicon Valley executives. Pathway has strong investor backing. To date, we have raised over $15M; our latest reported round was our seed. Our offices are located in Palo Alto, CA, as well as Paris, France, and Wroclaw, Poland.

The Opportunity
We are currently searching for 2 ML/AI Engineers with a solid software engineering backbone who are able to prototype, evaluate, improve, and productionize end-to-end Machine Learning projects with enterprise data. For representative examples of the type of projects you would be expected to create or deliver, see our AI pipeline templates. If you think it would be fun to create a hybrid vector/graph index that beats the state of the art on a RAG benchmark, to deliver a working AI pipeline to a client in a critical industry, or to pre-process datasets in a way that boosts LLM accuracy in inference and training, this is the job for you!

You Will
- Help design experimental end-to-end ML/AI pipelines
- Contribute to addressing new use cases, beyond the state of the art
- Improve and adapt AI pipelines for production, working directly with client data (often live data streams)
- Invent ways to pre-process data sources and perform tweaks (reranking, model parameter configuration, ...) for optimal performance of AI pipelines
- Design benchmark tasks and perform experiments
- Build unit tests and implement model monitoring
- Contribute high-quality production code to our developer frameworks, used by thousands of developers
- Help pre-process data sets for LLM training

The results of your work will play a crucial role in the success of both our developer offering and client delivery.

Requirements
Cover letter: It's always a pleasure to say hi! If you could leave us 2-3 lines, we'd really appreciate that.

You Are
- A graduate of a 4+-year university degree in Computer Science, with A-grades in both foundational courses (Algorithms, Computational Complexity, Graph Theory, ...) and Machine Learning courses
- Passionate about delivering high-quality code and solutions that work
- Good with data and engineering innovation in practice: you know how to put things together so that they don't blow up
- Experienced at hands-on Machine Learning / Data Science work in the Python stack (notebooks, etc.)
- Experienced with model monitoring, git, build systems, and CI/CD
- Respectful of others
- Curious about new technology: an avid reader of HN, arXiv feeds, AI digests, ...
- Fluent in English

Bonus Points
- A successful track record in algorithms (ICPC / IOI), data science contests (Kaggle), or a HuggingFace leaderboard
- A code portfolio you can show
- A PhD in Computer Science
- A paper at a major Machine Learning conference
- You like playing and tinkering with new tools in the LLM stack
- You are already part of the Pathway community, or have been recognized for your work in one of our bootcamps

Why You Should Apply
Join an intellectually stimulating work environment. Be a technology innovator who makes a difference: your code gets delivered to a community of thousands of developers, and to clients processing billions of records of data. Work in one of the hottest AI startups, one that believes in impactful research and foundational changes.

Benefits
- Type of contract: full-time, permanent
- Preferable joining date: January 2025. The positions are open until filled; please apply immediately.
- Compensation: competitive base salary (80th to 99th percentile) based on profile and location, plus an Employee Stock Option Plan and possible bonuses when working on client projects. The stated lower band of EUR 72k / USD 75k for the base salary applies to senior candidates; mid-senior or mid-level candidates may be considered with adapted salary bands.
- Location: remote work, with the possibility to work or meet with other team members in one of our offices: Palo Alto, CA; Paris, France; or Wroclaw, Poland. As a general rule, permanent residence in the EU, UK, US, or Canada is required. (Note for candidates based in India: we are proud to be part of the current Inter-IIT as well as a partner of ICPC India. Top IIT/IIIT graduates are more than welcome to apply regardless of current location.)

If you meet our broad requirements but are missing some experience, don't hesitate to reach out to us.
Palo Alto, CA, USA
Negotiable Salary