




Summary: Develop and deploy advanced conversational and generative AI models for a Voice AI platform, optimizing systems for real-time performance and building MLOps pipelines.

Highlights:
1. Develop and deploy advanced conversational and generative AI models
2. Optimize models and system architecture for real-time, low-latency performance
3. Architect and maintain the full MLOps pipeline for AI models

Skwid Inc. (dba Salient) has multiple openings for the following position in San Francisco, California. To apply, email a resume and cover letter to HR@trysalient.com and reference the job title. EOE. Principals only.

Full-Stack Engineer (Wage Offer: $170,000/yr): Develop and deploy advanced conversational and generative Artificial Intelligence (AI) models for a Voice AI platform. Create automated frameworks to continuously measure model accuracy, voice quality, and performance. Optimize models and system architecture for real-time, low-latency performance at massive scale, supporting millions of daily users and hundreds of millions of calls. Architect and maintain the full MLOps pipeline, including CI/CD processes for AI models and a system for the AI to learn from human preferences. Support critical infrastructure, including a dialing system and voice orchestration layers. Implement disaster recovery and high-availability protocols. Establish comprehensive observability using Signoz, Prometheus, and Grafana to monitor system health and performance. Assist in the development of new AI products, such as an AI-based quality-assurance solution for the auto industry, and scale these products to millions of daily calls in collaboration with the co-founders. Define the technical vision and roadmap for AI-powered products.

REQUIREMENTS: Master's degree or foreign equivalent in Computer Science, Software Engineering, or a related technical field. 2 years of experience as a Software Engineer, Software Developer, or in a related occupation.

SKILLS: Must have:
1. Experience with the Python programming language and the PyTorch framework.
2. Experience with conversational AI, generative models, and fine-tuning large language models (LLMs).
3. Experience with Reinforcement Learning from Human Feedback (RLHF) and developing automated evaluation frameworks.
4. Experience building and managing full MLOps pipelines, including deploying machine learning models, implementing CI/CD for AI, and developing strategies for model versioning and monitoring model drift.
5. Experience optimizing distributed systems for high throughput and low latency.
6. Experience with a cloud platform (AWS, GCP, or Azure), building high-availability and fault-tolerant systems, and implementing disaster recovery solutions.
7. Experience with telephony and voice orchestration layers.
8. Experience establishing observability stacks using industry-standard tools, specifically Signoz, Prometheus, or Grafana.


