Published 2026-04-04
Job Summary
We are seeking a skilled Azure Databricks Engineer with strong experience in Azure Data Factory (ADF) to design, develop, and maintain scalable data pipelines and analytics solutions on the Azure cloud platform. The ideal candidate will have expertise in big data processing, ETL/ELT workflows, and distributed computing using Databricks.
Key Responsibilities
Design and implement scalable data pipelines using Azure Data Factory (ADF)
Develop and optimize data processing workflows using Azure Databricks (PySpark/Scala)
Build ETL/ELT processes for ingesting, transforming, and loading large datasets
Integrate data from multiple sources such as APIs, databases, and data lakes
Work with Azure Data Lake Storage (ADLS) and Azure SQL Database
Monitor, troubleshoot, and optimize pipeline performance and data quality
Implement data security, governance, and compliance standards
Collaborate with data scientists, analysts, and stakeholders to deliver data solutions
Automate workflows and deployments using CI/CD pipelines
Maintain documentation for data architecture and processes
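As a rough illustration of the ETL responsibilities above, the sketch below walks through an extract-transform-load pass in plain Python standing in for the PySpark DataFrame API (a real workflow would run on a Databricks cluster and read from ADF-orchestrated sources). All record shapes, field names, and values are hypothetical.

```python
# Minimal ETL sketch: extract raw records, transform (clean + validate),
# and load into a target list standing in for a sink table.
# Field names and sample values are hypothetical.

def extract():
    # In a real pipeline this would read from an API, database, or data lake.
    return [
        {"id": 1, "amount": "120.50", "region": " east "},
        {"id": 2, "amount": "bad", "region": "west"},
        {"id": 3, "amount": "75.00", "region": "east"},
    ]

def transform(rows):
    # Clean strings, cast amounts, and drop rows that fail validation --
    # the kind of logic typically expressed as PySpark DataFrame operations.
    cleaned = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # skip malformed records
        cleaned.append({"id": row["id"], "amount": amount,
                        "region": row["region"].strip()})
    return cleaned

def load(rows, target):
    # Stand-in for writing to a sink such as Azure SQL Database or ADLS.
    target.extend(rows)
    return target

target_table = load(transform(extract()), [])
```

In PySpark the same cleaning and filtering would typically be chained DataFrame calls, but the extract-transform-load shape is the same.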
Required Skills & Qualifications
Bachelor’s degree in Computer Science, Engineering, or related field
10+ years of experience in Azure data engineering
Strong hands-on experience with:
Azure Databricks (PySpark preferred)
Azure Data Factory (ADF)
SQL and relational databases
Experience with big data technologies and distributed processing
Knowledge of data warehousing concepts and dimensional modeling
Familiarity with Azure services like:
Azure Data Lake Storage (ADLS Gen2)
Azure Synapse Analytics
Experience with version control (Git) and CI/CD tools
Strong problem-solving and debugging skills
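To picture the dimensional-modeling requirement above: a star schema splits flat transactional records into a dimension table keyed by a surrogate key and a fact table that references it. A plain-Python sketch with hypothetical field names (in a warehouse this would be done in SQL or PySpark):

```python
# Dimensional-modeling sketch: normalize flat sales records into a
# product dimension and a fact table (star-schema style).
# All field names and records are hypothetical.

flat_records = [
    {"order_id": 100, "product": "Widget", "category": "Tools", "qty": 2},
    {"order_id": 101, "product": "Gadget", "category": "Toys", "qty": 1},
    {"order_id": 102, "product": "Widget", "category": "Tools", "qty": 5},
]

dim_product = {}   # natural key -> surrogate key + descriptive attributes
fact_sales = []    # facts reference the dimension by surrogate key only

for rec in flat_records:
    key = rec["product"]
    if key not in dim_product:
        # Assign the next surrogate key on first sight of a product.
        dim_product[key] = {"product_sk": len(dim_product) + 1,
                            "product": key,
                            "category": rec["category"]}
    fact_sales.append({"order_id": rec["order_id"],
                       "product_sk": dim_product[key]["product_sk"],
                       "qty": rec["qty"]})
```

Repeated descriptive attributes land once in the dimension, and the fact table stays narrow, which is the core of dimensional modeling.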