Machine Learning Engineer

Pyramid Consulting, Inc Junagadh, Gujarat, IN

Published 2026-04-02

Description

Job Title: MLOps Engineer (6–8 Years of Experience)
Location: Remote (India)
Project Type: Permanent Role
Working Hours: 8 AM – 5 PM UK Time
Engagement: Migration of data science models over the next few months, followed by involvement in data science model creation, operations, and enhancements
Role Overview
The client currently operates data workflows in GCP and is migrating only the Data Science workloads to Azure Databricks. Input data originates in GCP and will continue to; data science workflows will run in Azure Databricks (the end state); and model outputs are written back to GCP, as they are today.
The MLOps Engineer will play a critical role in supporting model migration, building and optimizing MLOps workflows, and eventually contributing to broader data movement automation and self‑serve capabilities across cloud environments.
This role requires someone who deeply understands Databricks internals, PySpark, CI/CD orchestration, and ML model operationalization, along with working knowledge of both GCP and Azure.
Key Responsibilities
1. Model Migration & Optimization
Support migration of existing ML models from GCP to Azure Databricks. Understand existing model architecture and replicate/optimize it in Azure Databricks.
Work closely with the Data Science team to operationalize migrated models and further optimize them to reduce compute cost and increase test coverage.
2. MLOps & Workflow Orchestration
Set up robust CI/CD pipelines using GitHub Actions for ML model deployments in Databricks.
Implement and manage MLflow for tracking, versioning, and managing model lifecycle.
Build efficient and scalable data and ML pipelines using Databricks + PySpark.
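To illustrate the kind of CI/CD setup described above, here is a minimal sketch of a GitHub Actions workflow that deploys a Databricks Asset Bundle. The workflow name, target name, and secret names are hypothetical; the actual client pipeline may differ.

```yaml
# Hypothetical workflow: deploy a Databricks Asset Bundle on pushes to main.
name: deploy-ml-models
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Install the Databricks CLI via the official setup action.
      - uses: databricks/setup-cli@main

      # Validate and deploy the bundle to an assumed "prod" target;
      # DATABRICKS_HOST / DATABRICKS_TOKEN are assumed repository secrets.
      - name: Deploy bundle
        run: |
          databricks bundle validate
          databricks bundle deploy -t prod
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}
```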
3. Cloud & Data Movement Support
Collaborate with the Data Engineering team on data movement from GCP to Azure Databricks. Over time, take over parts of cross‑cloud data movement from the DE team and build self‑serve automation for data flows.
Build pipelines where outputs from Azure Databricks must be transferred back to GCP.
4. Architecture & Best Practices
Provide architectural inputs and workflow optimization guidance during and after migration.
Ensure scalable, cost‑efficient, and reliable model execution in Databricks.
Improve testing, monitoring, and performance tuning for migrated and future ML models.
Required Experience
6–8 years of experience in Data Engineering, ML Engineering, or MLOps roles.
Must‑Have Skills
Strong hands‑on expertise in Databricks and deep understanding of how it works under the hood.
Proficiency in PySpark: writing scalable jobs, understanding execution plans, and applying optimization techniques.
Experience building CI/CD pipelines using GitHub Actions.
Experience with MLflow for tracking and operationalizing ML models.
Knowledge of integrating workflows between GCP and Azure ecosystems.
Strong debugging, optimization, and cost‑efficiency mindset.
Good to Have
Experience with cross‑cloud data movement patterns.
Familiarity with DS model structures and ability to collaborate closely with DS teams.

Location

Junagadh
Gujarat
India
Attributes

Job type: Full time
Contract type: Permanent
Salary type: Monthly
Occupation: Machine learning engineer