Junior AI/MLOps Engineer
Infinitive is a data and AI consultancy that enables its clients to modernize and operationalize their data to create lasting and substantial value. We bring deep industry and technology expertise to drive and sustain adoption of new capabilities. We match our people and personalities to our clients' culture while bringing the right mix of talent and skills to deliver measurable value.
Infinitive has been named Best Small Firms to Work For by Consulting Magazine 8 times, most recently in 2025. Infinitive has also been named a Washington Post Top Workplace, Washington Business Journal Best Places to Work, and Virginia Business Best Places to Work.
Role Overview
As a Junior AI/MLOps Engineer, you will sit at the intersection of Data Science and Software Engineering. Your mission is to help us build, deploy, and monitor the automated pipelines that keep our machine learning models running smoothly in production. You won’t just build models; you’ll build the "factory" that produces them.
Key Responsibilities
Pipeline Automation: Assist in building and maintaining CI/CD pipelines tailored to machine learning, including Continuous Training (CT).
Model Deployment: Package ML models into reproducible environments using Docker and deploy them via REST APIs or batch processing.
Monitoring & Logging: Help set up dashboards to track model performance, data drift, and system health.
Infrastructure as Code: Work with senior engineers to manage cloud resources (AWS/GCP/Azure) using tools like Terraform or CloudFormation.
Collaboration: Bridge the gap between Data Scientists (who build the models) and Software Engineers (who build the product) to ensure seamless integration.
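To give a flavor of the monitoring work described above, here is a minimal sketch of a data drift check in Python. The function name, sample values, and thresholds are illustrative only; production pipelines typically use more robust statistics such as the population stability index or a Kolmogorov-Smirnov test.

```python
import statistics

def drift_score(reference, live):
    """Standardized shift of the live mean relative to reference data.

    A simple proxy for data drift: how many reference standard
    deviations the live mean has moved. Illustrative only.
    """
    ref_mean = statistics.mean(reference)
    ref_std = statistics.stdev(reference)
    live_mean = statistics.mean(live)
    return abs(live_mean - ref_mean) / ref_std

# Hypothetical feature values from training (reference) vs. production (live).
reference = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]
stable = [1.0, 1.02, 0.98]
shifted = [2.0, 2.1, 1.9]

print(drift_score(reference, stable))   # small: distribution looks stable
print(drift_score(reference, shifted))  # large: flag for investigation
```

In practice a score like this would feed the dashboards mentioned above, with alerts firing when a feature's score crosses an agreed threshold.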
Required Skills & Qualifications
Education: Bachelor’s or Master’s degree in Computer Science, Data Science, AI, or a related technical field.
Programming: Proficiency in Python (specifically libraries like Pandas, NumPy, and Scikit-learn).
Foundational ML: A strong understanding of the ML lifecycle—from data preprocessing and feature engineering to evaluation metrics.
Containerization: Familiarity with Docker and the concept of containerized applications.
Version Control: Strong command of Git (branching, merging, and Pull Requests).
Preferred (Bonus) Skills
Experience with MLOps tools like MLflow, Kubeflow, or DVC.
Exposure to cloud platforms (AWS SageMaker, Google Vertex AI, or Azure ML).
Basic understanding of Kubernetes or orchestration tools.
Knowledge of SQL and NoSQL databases.
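The core idea behind experiment-tracking tools like MLflow is recording each training run's parameters and metrics so results are reproducible and comparable. The concept can be sketched in plain Python with a toy tracker (this is a hypothetical stand-in to illustrate the idea, not the MLflow API):

```python
import json
import os
import tempfile
import time

class RunTracker:
    """Toy experiment tracker: one JSON file per training run.

    Illustrates what tools like MLflow automate; not their actual API.
    """

    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def log_run(self, params, metrics):
        # Assign a sequential run id and persist params + metrics.
        run_id = f"run_{len(os.listdir(self.root)):04d}"
        record = {"timestamp": time.time(), "params": params, "metrics": metrics}
        with open(os.path.join(self.root, run_id + ".json"), "w") as f:
            json.dump(record, f)
        return run_id

    def best_run(self, metric):
        # Scan all recorded runs and return the id with the highest metric.
        runs = []
        for name in os.listdir(self.root):
            with open(os.path.join(self.root, name)) as f:
                runs.append((name.removesuffix(".json"), json.load(f)))
        return max(runs, key=lambda r: r[1]["metrics"][metric])[0]

tracker = RunTracker(tempfile.mkdtemp())
tracker.log_run({"lr": 0.1}, {"accuracy": 0.81})
tracker.log_run({"lr": 0.01}, {"accuracy": 0.88})
print(tracker.best_run("accuracy"))  # run_0001
```

Real tracking servers add run comparison UIs, artifact storage, and model registries on top of this basic record-and-query loop.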
Why You’ll Love This Role
Impact: You will see your work directly influence how models perform in the real world.
Growth: You’ll be mentored by senior engineers in one of the fastest-growing niches in tech.
Innovation: We encourage experimenting with new tools to solve the "unsolved" problems of AI reliability.