
Platform Engineer AI Tool & Integration

Negotiable Salary

Axiom Software Solutions Limited

Coral Gables, FL, USA


Description

Platform Engineer - AI Tool & Integration

Position Overview:
The Data Analytics & AI department at RCG is seeking a highly skilled and experienced Software & Platform Engineer to join our team. This pivotal role requires a strong technical background in AI tooling, data platform architecture, cloud computing, and big data technologies. The successful candidate will be responsible for all tooling and integration with GenAI LLMs, maintaining our Azure platform with OpenAI, and leveraging Databricks Mosaic AI. This role will be instrumental in driving innovation, ensuring seamless integration, and optimizing our AI and data platforms to meet the evolving needs of the business.

Key Responsibilities:
• Design, develop, and maintain tooling and integration for GenAI LLMs and other AI models.
• Manage and optimize our Azure platform, ensuring seamless integration with OpenAI and Databricks Mosaic AI.
• Collaborate with cross-functional teams to identify and implement innovative AI solutions that enhance our platform capabilities.
• Stay up to date with the latest advancements in AI and machine learning technologies to drive continuous improvement and innovation.
• Develop and implement best practices for AI model deployment and scaling.
• Design and execute integration strategies for incorporating large language models (LLMs) and CoPilot technologies with existing business platforms.
• Assess and recommend suitable LLM and CoPilot technologies that align with business needs and technical requirements.
• Conduct feasibility studies and proofs of concept to validate the integration of new tools and technologies.
• Keep abreast of the latest advancements in LLM, CoPilot, and related technologies to identify opportunities for further innovation and improvement.
• Understand and leverage MS CoPilot and MS CoPilot Studio for enhanced productivity and collaboration within the development team.
• Integrate MS CoPilot tools into existing workflows and ensure seamless integration with other systems and applications used by the team.
• Work closely with product managers, data scientists, and other stakeholders to gather requirements and ensure successful integration of LLM and CoPilot technologies.

Requirements:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• Proven experience in software/system/platform engineering, with a focus on AI tooling and integration.
• Strong expertise in working with Azure, including managing and integrating AI services such as OpenAI and Databricks Mosaic AI.
• Proficiency in programming languages such as Python, Java, or C++.
• Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch).
• Solid understanding of software development methodologies, including Agile and DevOps practices, and of tools for continuous integration and deployment.
• Understanding of data security best practices and compliance requirements in software development and integration.
• Excellent problem-solving skills and the ability to work in a fast-paced, dynamic environment.
• Strong communication and collaboration skills.

Preferred Qualifications:
• Strong proficiency in one or more programming languages such as Python, Java, C#, or JavaScript.
• Experience with large language models (LLMs) and familiarity with tools such as OpenAI GPT or Google BERT.
• Hands-on experience with Databricks for data engineering, data science, or machine learning workflows.
• Proficiency in using Databricks for building and managing data pipelines, ETL processes, and real-time data processing.
• Hands-on experience with Microsoft CoPilot and CoPilot Studio.
• Proficiency in working with APIs, microservices architecture, and web services (REST/SOAP).
• Familiarity with cloud platforms such as AWS, Azure, or Google Cloud, and their integration services.
• Knowledge of database systems, both SQL and NoSQL (e.g., MySQL, MongoDB).
• Knowledge of natural language processing (NLP) and large language models (LLMs).
• Previous experience in a similar role within a technology-driven organization.
• Certifications in Azure, AI, or related areas.
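The integration responsibilities above amount to putting a thin, provider-agnostic layer between business code and any one LLM SDK. As a minimal sketch (not the employer's actual design), the wrapper below takes a pluggable `complete_fn`, which stands in for a real chat-completions call to Azure OpenAI, Databricks, or similar; the `echo_backend` stub is purely illustrative.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class LLMClient:
    """Provider-agnostic chat wrapper. `complete_fn` is an assumption:
    in a real deployment it would call a provider SDK's chat endpoint."""
    complete_fn: Callable[[List[Dict[str, str]]], str]
    system_prompt: str = "You are a helpful assistant."

    def ask(self, user_text: str) -> str:
        # Build the common chat-message shape used by most chat APIs.
        messages = [
            {"role": "system", "content": self.system_prompt},
            {"role": "user", "content": user_text},
        ]
        return self.complete_fn(messages)

# Stub backend for local testing; swap in a real SDK call in production.
def echo_backend(messages):
    return "echo: " + messages[-1]["content"]

client = LLMClient(complete_fn=echo_backend)
print(client.ask("ping"))  # echo: ping
```

Keeping the provider call behind one interface makes it straightforward to assess and swap LLM backends, one of the evaluation duties the listing describes.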

Source: Workable



You may also like

Workable
Sr GCP Engineer
Infrastructure Automation & Management:
• Design, implement, and maintain scalable, reliable, and secure cloud infrastructure using GCP services.
• Automate cloud infrastructure provisioning, scaling, and monitoring using Infrastructure as Code (IaC) tools such as Terraform or Google Cloud Deployment Manager.
• Manage and optimize GCP resources such as Compute Engine, Kubernetes Engine, Cloud Functions, and BigQuery to support development teams.

CI/CD Pipeline Management:
• Build, maintain, and enhance continuous integration and continuous deployment (CI/CD) pipelines to ensure seamless and automated code deployment to GCP environments.
• Integrate CI/CD pipelines with GCP services like Cloud Build and Cloud Source Repositories, or third-party tools like Jenkins.
• Ensure pipelines are optimized for faster build, test, and deployment cycles.

Monitoring & Incident Management:
• Implement and manage cloud monitoring and logging solutions using Dynatrace and GCP-native tools like Stackdriver (Monitoring, Logging, and Trace).
• Monitor cloud infrastructure health and resolve performance issues, ensuring minimal downtime and maximum uptime.
• Set up incident management workflows, implement alerting mechanisms, and create runbooks for rapid issue resolution.

Security & Compliance:
• Implement security best practices for cloud infrastructure, including identity and access management (IAM), encryption, and network security.
• Ensure GCP environments comply with organizational security policies and industry standards such as GDPR/CCPA or PCI-DSS.
• Conduct vulnerability assessments and perform regular patching and system updates to mitigate security risks.

Collaboration & Support:
• Collaborate with development teams to design cloud-native applications that are optimized for performance, security, and scalability on GCP.
• Work closely with cloud architects to provide input on cloud design and best practices for continuous integration, testing, and deployment.
• Provide day-to-day support for development, QA, and production environments, ensuring availability and stability.

Cost Optimization:
• Monitor and optimize cloud costs by analyzing resource utilization and recommending cost-saving measures such as right-sizing instances, using preemptible VMs, or implementing auto-scaling.

Tooling & Scripting:
• Develop and maintain scripts (using languages like Python, Bash, or PowerShell) to automate routine tasks and system operations.
• Use configuration management tools like Ansible, Chef, or Puppet to manage cloud resources and maintain system configurations.

Required Qualifications & Skills:

Experience:
• 3+ years of experience as a DevOps Engineer or Cloud Engineer, with hands-on experience managing cloud infrastructure.
• Proven experience working with Google Cloud Platform (GCP) services such as Compute Engine, Cloud Storage, Kubernetes Engine, Pub/Sub, Cloud SQL, and others.
• Experience automating cloud infrastructure with Infrastructure as Code (IaC) tools like Terraform, Cloud Deployment Manager, or Ansible.

Technical Skills:
• Strong knowledge of CI/CD tools and processes (e.g., Jenkins, GitLab CI, CircleCI, or GCP Cloud Build).
• Proficiency in scripting and automation using Python, Bash, or similar languages.
• Strong understanding of containerization technologies (Docker) and container orchestration tools like Kubernetes.
• Familiarity with GCP networking, security (IAM, VPC, firewall rules), and monitoring tools (Stackdriver).

Cloud & DevOps Tools:
• Experience with Git for version control and collaboration.
• Familiarity with GCP-native DevOps tools like Cloud Build, Cloud Source Repositories, Artifact Registry, and Binary Authorization.
• Understanding of DevOps practices and principles, including Continuous Integration, Continuous Delivery, Infrastructure as Code, and Monitoring/Alerting.

Security & Compliance:
• Knowledge of security best practices for cloud environments, including IAM, network security, and data encryption.
• Understanding of compliance and regulatory requirements related to cloud computing (e.g., GDPR/CCPA, HIPAA, or PCI).

Soft Skills:
• Strong problem-solving skills with the ability to work in a fast-paced environment.
• Excellent communication skills, with the ability to explain technical concepts to both technical and non-technical stakeholders.
• Team-oriented mindset with the ability to work collaboratively with cross-functional teams.

Certifications (Preferred):
• Google Professional Cloud DevOps Engineer certification (preferred).
• Other GCP certifications such as Google Professional Cloud Architect or Associate Cloud Engineer are a plus.
• DevOps certifications like Certified Kubernetes Administrator (CKA) or AWS/GCP DevOps certification are advantageous.
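The automation and incident-management duties above typically involve scripting around cloud APIs that can fail transiently. As a small illustrative sketch (not tied to any specific GCP API), here is the kind of exponential-backoff retry helper such scripts often wrap around flaky calls; the `flaky` function below is a hypothetical stand-in for a cloud operation.

```python
import time

def retry(fn, attempts=4, base_delay=0.01, exceptions=(Exception,)):
    """Call fn(), retrying with exponential backoff on failure.
    Re-raises the last exception once attempts are exhausted."""
    for i in range(attempts):
        try:
            return fn()
        except exceptions:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))  # 0.01s, 0.02s, 0.04s, ...

# Hypothetical flaky operation: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(retry(flaky))  # ok
```

In practice the `exceptions` tuple would be narrowed to the transient error types of the client library in use, so genuine bugs still fail fast.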
Springfield, MO, USA
Negotiable Salary
Craigslist
Remote Software Developer
We are searching for ambitious individuals who want to begin careers in technology. If you want practical coding skills, project-based experience, and readiness for entry-level developer roles, this program is the right fit. This flexible remote program offers nearly 900 hours of guided training and projects. Study part-time or full-time while mastering programming languages, developer tools, and workflows, all while building a strong résumé and portfolio.

🖥️ Technology & Programming Fundamentals
- Learn how computers, networks, browsers, and the internet work
- Study algorithms, data structures, number systems, and security concepts
- Practice Python scripting, command-line basics, and flowchart logic

💻 Web & Front-End Development
- Build and style websites with HTML5, CSS3, and Bootstrap
- Create interactive features using JavaScript, jQuery, and React.js
- Practice responsive design techniques and modern layouts

🗄️ Back-End & Database Development
- Design and maintain databases with SQL and SQL Server
- Perform CRUD operations and relational queries
- Build back-end systems using Python (Django) and C# (.NET Core)

🧑‍💻 Programming Languages & Tools
- Master C#, Python, JavaScript, HTML, CSS, SQL, and more
- Work with Git, GitHub, Visual Studio, and Team Foundation Server
- Apply collaboration and version-control best practices

🧪 Capstone Projects
- Complete two real-world projects (Python + C#)
- Build portfolio-ready apps using Agile, Scrum, and DevOps
- Practice debugging, teamwork, and problem-solving

🧰 Career Preparation
- Learn résumé and cover letter development
- Practice whiteboard and technical interviews
- Prepare for entry-level developer positions

🚀 No prior background required. Remote applicants encouraged. Take the first step into a career in tech.
👉 Apply today: https://softwaredevpros.online/
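The curriculum above covers CRUD operations and relational queries. As a minimal illustration of what that looks like in Python (not the program's actual coursework), the sketch below runs all four operations against an in-memory SQLite database using the standard-library `sqlite3` module; the `students` table is a made-up example.

```python
import sqlite3

# In-memory database: nothing is written to disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT)")

# Create: parameterized insert (avoids SQL injection).
conn.execute("INSERT INTO students (name) VALUES (?)", ("Ada",))

# Read: fetch the row back.
row = conn.execute("SELECT name FROM students WHERE id = 1").fetchone()
print(row[0])  # Ada

# Update: change the stored name.
conn.execute("UPDATE students SET name = ? WHERE id = 1", ("Grace",))

# Delete: remove the row, then confirm the table is empty.
conn.execute("DELETE FROM students WHERE id = 1")
count = conn.execute("SELECT COUNT(*) FROM students").fetchone()[0]
print(count)  # 0
```

The same statements work against SQL Server or any other relational database; only the connection setup and placeholder syntax differ.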
84,21 Southfield Rd, Detroit, MI 48228, USA
$30/hour
Workable
Machine Learning Engineer
Tiger Analytics is an advanced analytics consulting firm. We are the trusted analytics partner for several Fortune 100 companies, enabling them to generate business value from data. Our consultants bring deep expertise in Data Science, Machine Learning, and AI. Our business value and leadership have been recognized by various market research firms, including Forrester and Gartner.

Are you a Machine Learning Engineer with expertise in Google Cloud Platform (GCP) and Vertex AI? We are looking for two talented professionals to join our team in a fully remote, onshore capacity. If you thrive in building and deploying scalable AI solutions, this role is for you!

What You'll Do:
- Collaborate with cross-functional teams to design and deploy ML models.
- Develop reusable, scalable code for AI/ML applications.
- Leverage GCP services to build end-to-end machine learning pipelines.
- Optimize models for performance and scalability using Vertex AI.

Key Requirements:
- Google Cloud Platform (GCP) experience: strong proficiency in GCP services, including data engineering and machine learning tools.
- Google Vertex AI expertise: hands-on experience with model training, deployment, and optimization using Vertex AI.
- Model development & deployment: proven ability to design, build, and productionize machine learning models.
- API development: skilled in developing robust APIs for seamless integrations.
- Python programming with CI/CD: experience with Python-based applications and implementing CI/CD pipelines.

Why Join Us?
- Work remotely while contributing to cutting-edge projects.
- Collaborate with a dynamic team passionate about AI/ML innovation.
- Opportunity to work with the latest Google Cloud technologies.

Ready to take the next step? Apply now and be part of a team that's shaping the future of AI!

Benefits:
Significant career development opportunities exist as the company grows. The position offers a unique opportunity to be part of a small, fast-growing, challenging, and entrepreneurial environment, with a high degree of individual responsibility.
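The API-development requirement above usually starts with validating prediction requests before they reach a model. As a framework-free sketch (the endpoint shape and feature count are assumptions, not this employer's spec), the function below checks a hypothetical `/predict` payload of the form `{"features": [n1, n2, n3, n4]}`.

```python
def validate_predict_request(payload):
    """Validate a hypothetical /predict request body.
    Returns (ok, message); EXPECTED is an assumed feature count."""
    EXPECTED = 4
    if not isinstance(payload, dict) or "features" not in payload:
        return False, "missing 'features'"
    feats = payload["features"]
    if not isinstance(feats, list) or len(feats) != EXPECTED:
        return False, f"expected {EXPECTED} features"
    if not all(isinstance(x, (int, float)) for x in feats):
        return False, "features must be numeric"
    return True, "ok"

print(validate_predict_request({"features": [1.0, 2.0, 3.0, 4.0]}))  # (True, 'ok')
print(validate_predict_request({"features": [1.0, "x"]}))            # (False, 'expected 4 features')
```

In production the same checks would sit behind a web framework route and the valid payload would be handed to the deployed Vertex AI endpoint.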
Dallas, TX, USA
Negotiable Salary
Craigslist
Startup Opportunity: Website Developer & Project Manager (BURLINGTON)
We're an established Skagit County wholesale supply company launching a tech division, now in the final stages of launching 15 e-commerce websites and promotions platforms (a national, data-driven marketplace technology connecting businesses and customers). The foundations are already built, and we're seeking top-tier experts with deep e-commerce and technical skills to take over, refine, and drive this project to market success. We estimate $2-10 million in revenue from online sales.

REQUIREMENTS
- Complete web development expertise: ability to quickly understand and take ownership of an existing back-end and front-end codebase from overseas developers.
- AI-driven feature knowledge: experience integrating AI tools and personalization algorithms.
- Project leadership: comfort managing offshore development teams to outsource specialized features while ensuring top-quality delivery.
- Technical versatility: strong knowledge of scalable cloud infrastructure, backend APIs, frontend frameworks, and payment integrations.
- Strategic vision: someone excited to grow with a startup, bring fresh ideas, and eventually take on equity/percentage ownership.

RESPONSIBILITIES
- Take over the older live, near-complete web platforms and migrate them to the newly developed platforms and/or bring them online.
- Work closely with the original developer (available for initial support) to ensure a smooth handover.
- Oversee new feature development and performance optimization.
- Help shape and execute product strategy while scaling the platform.

ADDITIONAL REQUIREMENTS
- Proven track record.
- Strong technical leadership and communication skills.
- Familiarity with e-commerce tools for personalization and customer engagement.
- Experience managing or working with outsourced development teams.
- Must be Whatcom/Skagit based or able to collaborate on Pacific Time. P/T or F/T.
504 N Burlington Blvd, Burlington, WA 98233, USA
Negotiable Salary
Workable
Systems Engineer
NetX is a leading provider of DAM software for museums, heritage organizations, and businesses around the world. We're a smaller company located in Portland, Oregon. We're a passionate, collaborative team that believes in building not just software, but also relationships with our customers. This endeavor started more than 20 years ago and continues to grow, thrive, and excel. We are currently looking for the right person to grow with us as we expand our customer base. Visit www.netx.net to learn more about us.

Objective
NetX's Systems Engineer looks at what's going on in our systems and figures out how to fix it, which sometimes means designing new solutions from scratch. You will be part of a talented team of engineers that demonstrates superb technical competency, delivering mission-critical infrastructure and ensuring the highest levels of availability, performance, and security. In addition, you are responsible for providing advice regarding the appropriate hardware and/or software to ensure our SaaS platform remains robust and performant. We're looking for a team player to be a part of our dynamic, flexible environment, where we adhere to an approach inspired by the Shape Up methodology. We encourage self-organization and a collaborative atmosphere where team members from various functions work together to plan, prioritize, and execute tasks effectively. We'd love to hear from you if you're ready to contribute to our forward-thinking approach.

Responsibilities
● Manage and monitor all installed systems and infrastructure.
● Install, configure, test, and maintain systems, application software, and system management tools.
● Proactively ensure the highest levels of systems and infrastructure availability.
● Monitor and test application performance for potential bottlenecks, identify possible solutions, and work with developers to implement those fixes.
● Maintain security, backup, and redundancy strategies.
● Write and maintain custom (e.g., Ansible) scripts to increase system efficiency and reduce human intervention time on any task.
● Participate in the design of information and operational support systems.
● Liaise with vendors for problem resolution.
● When addressing an issue, prioritize solutions that resolve the problem in a sustainable, long-term manner.
● Strive for automation, actively working to implement automated processes and systems within the infrastructure.
● Actively search for areas where recurring reactive work can be transformed into proactive solutions.
● On-call rotation: tier 1 and tier 2 (see: NetX Incident Response Process).
● Assist Support with Ops- and Platform-related technical issues (tier 2).

Requirements
● BS/MS degree in Computer Science, Engineering, or a related subject.
● Proven working experience installing, configuring, and troubleshooting UNIX/Linux-based environments.
● Solid experience in the administration and performance tuning of application stacks (e.g., Tomcat, Apache, NGINX).
● Solid cloud experience, preferably in AWS.
● Experience with virtualization and containerization (e.g., Docker, Kubernetes, VMware, VirtualBox).
● Experience with monitoring and observability systems (e.g., Nagios/Icinga, Prometheus, Grafana).
● Experience with automation software (e.g., Ansible).
● Solid scripting skills (e.g., shell scripts, Python).
● Solid networking knowledge (OSI network layers, TCP/IP).

Benefits
We offer a competitive salary along with a benefits package that includes:
● Medical, Dental, and Vision Insurance
● Life and Short/Long-Term Disability Insurance
● 401k Retirement with Employer Match
● PTO
● Paid Holidays
● Commuting Expense Assistance
● Flexible working arrangements
● Friendly dogs are welcome in the office!
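The scripting and monitoring duties above often boil down to small scripts that turn raw logs into actionable counts before an alerting rule fires. As an illustrative sketch (the log format here is hypothetical, not NetX's), the function below tallies log lines by severity level.

```python
import re

# Hypothetical log format: "LEVEL message" per line.
LOG = """\
INFO starting service
ERROR disk full on /var
INFO retrying
ERROR disk full on /var
"""

def count_levels(text):
    """Count occurrences of each leading log level in the text."""
    counts = {}
    for line in text.splitlines():
        m = re.match(r"(\w+)\s", line)
        if m:
            lvl = m.group(1)
            counts[lvl] = counts.get(lvl, 0) + 1
    return counts

print(count_levels(LOG))  # {'INFO': 2, 'ERROR': 2}
```

A cron job or Ansible-deployed script built on this pattern could raise an alert when the ERROR count crosses a threshold, turning recurring reactive work into a proactive check.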
Portland, OR, USA
Negotiable Salary
© 2025 Servanan International Pte. Ltd.