
Java Solutions Architect

Negotiable Salary

Qode

Texas, USA


Description

Job Title: Java Solutions Architect
Job Location: Dallas, TX / Pittsburgh, PA

Job Summary: We are seeking a highly skilled Java Solutions Architect with a strong background in systems integration, application development, and enterprise architecture. The ideal candidate will have hands-on experience with J2EE application servers, SOA technologies, and enterprise tools such as TIBCO and Informatica. This role requires a solid understanding of system design, software engineering principles, and a strategic mindset to support scalable and maintainable solutions across enterprise platforms.

Experience:
· B.S. in Computer Science, Information Systems, or a related field, or equivalent combination of education and experience.
· 15+ years of experience in Java development, with a strong focus on enterprise application architecture and solution design.
· 4–6 years of progressive IS/business experience, specifically in programming, systems engineering, and/or analysis.
· Broad understanding of enterprise information systems and system architecture principles.
· 2+ years of hands-on experience with:
  o Deployment and administration of J2EE application servers (WebSphere, JBoss, Tomcat, Spring)
  o TIBCO suite of tools (e.g., Business Works, EMS)
  o Informatica tools (PowerCenter, IDQ)
· 2+ years of experience with integration technologies, including:
  o SOA (Service-Oriented Architecture)
  o ESB (Enterprise Service Bus)
  o JMS messaging systems
  o XML and Web Services (SOAP/REST)

Source: workable

Location
Texas, USA


You may also like

Workable
Agentic AI Architect (GCP)
Atlanta, GA, USA
Tiger Analytics is looking for an experienced Agentic AI Architect (GCP) with Gen AI experience to join our fast-growing advanced analytics consulting firm. Our employees bring deep expertise in Machine Learning, Data Science, and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership have been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best global analytics consulting team in the world.

You will be responsible for:
· Leading the design and development of an Agentic AI platform, applying deep expertise in machine learning, system architecture, and AI agent frameworks to build scalable, autonomous systems.
· Architecting and implementing core systems for agent-based AI workflows.
· Designing and deploying LLM-based pipelines, agent orchestration, and vector-based memory systems.
· Developing and optimizing ML models, pipelines, and orchestration logic.
· Driving technical strategy, tooling, and infrastructure decisions.
· Architecting and implementing agentic AI systems leveraging GCP services (Vertex AI, BigQuery, Cloud Functions, Pub/Sub, etc.).

Requirements

Technical Skills Required:
· Programming Languages: Proficiency in Python is essential; C++ experience is ideal but not required.
· Agentic AI: Expertise in LangChain/LangGraph, CrewAI, Semantic Kernel/AutoGen, and the OpenAI Agentic SDK.
· Machine Learning Frameworks: Experience with TensorFlow, PyTorch, Scikit-learn, and AutoML.
· Generative AI: Hands-on experience with generative AI models, RAG (Retrieval-Augmented Generation) architecture, and Natural Language Processing (NLP).
· Cloud Platforms: Familiarity with AWS (SageMaker, EC2, S3) and/or Google Cloud Platform (GCP).
· Data Engineering: Proficiency in data preprocessing and feature engineering.
· Version Control: Experience with GitHub for version control.
· Development Tools: Proficiency with development tools such as VS Code and Jupyter Notebook.
· Containerization: Experience with Docker containerization and deployment techniques.
· Data Warehousing: Knowledge of Snowflake and Oracle is a plus.
· APIs: Familiarity with the AWS Bedrock API and/or other GenAI APIs.
· Data Science Practices: Skills in building models, testing/validation, and deployment.
· Collaboration: Experience working in an Agile framework.

Desired Skills:
· RAG Architecture: Experience with data ingestion, data retrieval, and data generation using optimal methods such as hybrid search.
· Insurance/Financial Domain: Knowledge of the insurance industry is a big plus.
· Google Cloud Platform: Working knowledge is preferred.

Additional Expertise:
· Industry Experience: 8+ years of industry experience in AI/ML and data engineering, with a track record of working in large-scale programs and solving complex use cases using GCP AI Platform/Vertex AI.
· Agentic AI Architecture: Exceptional command of Agentic AI architecture, development, testing, and research of both neural-based and symbolic agents, using current-generation deployments and next-generation patterns/research.
· Agentic Systems: Expertise in building agentic systems using techniques including multi-agent systems, reinforcement learning, flexible/dynamic workflows, caching/memory management, and concurrent orchestration. Proficiency in one or more Agentic AI frameworks such as LangGraph, CrewAI, Semantic Kernel, etc.
· Python Proficiency: Expertise in the Python language to build large, scalable applications and conduct performance analysis and tuning.
· Prompt Engineering: Strong skills in prompt engineering and its techniques, including the design, development, and refinement of prompts (zero-shot, few-shot, and chain-of-thought approaches) to maximize accuracy and leverage optimization tools.
· IR/RAG Systems: Experience in designing, building, and implementing IR/RAG systems with Vector DB and Knowledge Graph.
· Model Evaluation: Strong skills in the evaluation of models and their tools. Experience in conducting rigorous A/B testing and performance benchmarking of prompt/LLM variations, using both quantitative metrics and qualitative feedback.

Benefits
This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.
Negotiable Salary
Craigslist
DAS TECHNICIAN
222-298 E Pearl St, Jackson, MS 39201, USA
HardHat National Accounts is hiring experienced DAS Technicians/Installers to oversee and manage in-building wireless (IBW) DAS installations and low-voltage projects. The ideal candidate will have hands-on experience leading crews, coordinating job sites, and ensuring the successful completion of projects on time and within budget.

Locations: Sandston, Virginia; South Hill, Virginia; Atlanta, Georgia

Pay is up to $35/hour. Per diem is provided if you meet requirements and is paid for all 7 days. We can be reached at 713.487.6911 by call or text for an immediate interview.

Job scope, not limited to the following:
Installation, maintenance, and troubleshooting of Distributed Antenna Systems (DAS) to enhance telecommunications coverage. Reporting to the Technical Manager, your core skills in computer networking, cabling, and telecommunication will be essential in ensuring system reliability. Your premium skills in network installation, customer service, and CCTV will enhance client satisfaction and operational efficiency.

Minimum experience required:
Basic understanding of RF principles, wireless communication technology, and DAS systems
Testing and commissioning
Troubleshooting and problem solving to address installation issues

We offer a competitive benefits plan, weekly pay, and long-term work. Please call/text 713.487.6911 for an immediate phone interview.

All project estimates are mere estimates and not representative of a guaranteed period of employment. HardHat is always subject to our clients' termination of a project or assignment.
$35/hour
Workable
ML Ops Architect
Dallas, TX, USA
Tiger Analytics is an advanced analytics consulting firm. We are the trusted analytics partner for several Fortune 100 companies, enabling them to generate business value from data. Our consultants bring deep expertise in Data Science, Machine Learning, and AI. Our business value and leadership have been recognized by various market research firms, including Forrester and Gartner. We are looking for motivated and passionate Machine Learning Engineers for our team.

Job Description: As a Senior ML Ops Engineer, you will join a team of experienced Machine Learning Engineers that supports, builds, and enables Machine Learning capabilities across the organization. You will work closely with internal customers and infrastructure teams to build our next-generation data science workbench, ML platform, and products. You will be able to further expand your knowledge and develop your expertise in modern Machine Learning frameworks, libraries, and technologies while working closely with internal stakeholders to understand evolving business needs. If you have a penchant for creative solutions and enjoy working in a hands-on, collaborative environment, then this role is for you.

Requirements

What you'll do in the role:
· Implement scalable and reliable systems leveraging cloud-based architectures, technologies, and platforms to handle model inference at scale.
· Deploy and manage machine learning and data pipelines in production environments.
· Work on containerization and orchestration solutions for model deployment.
· Participate in fast iteration cycles, adapting to evolving project requirements.
· Collaborate as part of a cross-functional Agile team to create and enhance software that enables state-of-the-art big data and ML applications.
· Leverage CI/CD best practices, including test automation and monitoring, to ensure successful deployment of ML models and application code.
· Ensure all code is well-managed to reduce vulnerabilities, models are well-governed from a risk perspective, and the ML follows best practices in Responsible and Explainable AI.
· Collaborate with data scientists, software engineers, data engineers, and other stakeholders to develop and implement best practices for MLOps, including CI/CD pipelines, version control, model versioning, monitoring, alerting, and automated model deployment.
· Manage and monitor machine learning infrastructure, ensuring high availability and performance.
· Implement robust monitoring and logging solutions for tracking model performance and system health.
· Monitor real-time performance of deployed models, analyze performance data, and proactively identify and address performance issues to ensure optimal model performance.
· Troubleshoot and resolve production issues related to ML model deployment, performance, and scalability in a timely and efficient manner.
· Implement security best practices for machine learning systems and ensure compliance with data protection and privacy regulations.
· Collaborate with platform engineers to effectively manage cloud compute resources for ML model deployment, monitoring, and performance optimization.
· Develop and maintain documentation, standard operating procedures, and guidelines related to MLOps processes, tools, and best practices.

Basic Qualifications:
· Master's or doctoral degree in computer science, electrical engineering, mathematics, or a similar field.
· Typically requires 7+ years of hands-on work experience developing and applying advanced analytics solutions in a corporate environment, with at least 4 years of experience programming with Python.
· At least 3 years of experience designing and building data-intensive solutions using distributed computing.
· At least 3 years of experience productionizing, monitoring, and maintaining models.

Must-have skills:
· Understanding of the Azure stack, such as Azure Machine Learning, Azure Data Factory, Azure Databricks, Azure Kubernetes Service, Azure Monitor, etc.
· Demonstrated expertise in building and deploying AI/Machine Learning solutions at scale leveraging cloud platforms such as AWS, Azure, or Google Cloud Platform.
· Experience in developing and maintaining APIs (e.g., REST).
· Experience specifying infrastructure and Infrastructure as Code (e.g., Ansible, Terraform).
· Experience in designing, developing, and scaling complex data and feature pipelines feeding ML models and evaluating their performance.
· Ability to work across the full stack and move fluidly between programming languages and MLOps technologies (e.g., Python, Spark, Databricks, GitHub, MLflow, Airflow).
· Expertise in Unix shell scripting and dependency-driven job schedulers.
· Understanding of security and compliance requirements in ML infrastructure.
· Experience with visualization technologies (e.g., RShiny, Streamlit, Python Dash, Tableau, Power BI).
· Familiarity with data privacy standards, methodologies, and best practices.

Benefits
Significant career development opportunities exist as the company grows. The position offers a unique opportunity to be part of a small, fast-growing, challenging, and entrepreneurial environment, with a high degree of individual responsibility.
Negotiable Salary