Job Description
Title: Data Engineer
Duration: 6 Months (with possible extension)
Experience: 10+ years
Work Location: Remote
Screening Checklist:
- Proficiency in interpreting data transformation logic written in T-SQL and implementing equivalent processes within Databricks
- Ability to design and implement data ingestion pipelines using Azure Data Factory (from source to RAW layer)
- Basic knowledge of C# and SQL (at least able to read the code; no need to write it)
- Experience in collecting and analyzing performance metrics to optimize data ingestion pipelines
- Competence in performing performance optimizations for Databricks read/write queries as needed
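The metrics-analysis item in the checklist can be illustrated with a minimal sketch. The field names (`run_id`, `rows_read`, `duration_s`) are illustrative assumptions, not an actual Azure Data Factory or Databricks monitoring schema:

```python
# Minimal sketch: derive per-run throughput from hypothetical pipeline run
# stats. Field names are assumptions, not an ADF/Databricks API schema.

def throughput_rows_per_sec(runs):
    """Return {run_id: rows/sec} for each run, skipping zero-duration runs."""
    return {
        r["run_id"]: r["rows_read"] / r["duration_s"]
        for r in runs
        if r["duration_s"] > 0
    }

runs = [
    {"run_id": "r1", "rows_read": 1_000_000, "duration_s": 200},
    {"run_id": "r2", "rows_read": 1_200_000, "duration_s": 150},
]
rates = throughput_rows_per_sec(runs)
# r1 -> 5000.0 rows/sec, r2 -> 8000.0 rows/sec
```

Comparing such rates across runs is one simple way to spot the inefficiencies this checklist item refers to.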
Job Overview
We are currently seeking experienced Data Engineers (5–7 years of experience) with strong expertise in Databricks, PySpark, and Data Fabric concepts to contribute to an ongoing enterprise data transformation initiative. The ideal candidates will have solid hands-on engineering skills, a good understanding of modern data architectures, and the ability to work collaboratively within cross-functional teams.
Key capabilities and expectations include:
- Strong experience in understanding and translating data transformation logic written in T-SQL and implementing equivalent, efficient transformations in Databricks using PySpark, aligned with Data Fabric design principles.
- Hands-on experience in designing and implementing data ingestion pipelines using Azure Data Factory, enabling reliable data movement from source systems to the RAW and curated data layers within a Data Fabric ecosystem.
- Working knowledge of Data Fabric concepts, including metadata-driven pipelines, data integration, orchestration, data lineage, and governance, with the ability to apply these principles in day-to-day engineering tasks.
- Experience in monitoring, collecting, and analyzing pipeline performance metrics to identify inefficiencies and support optimization of data ingestion and processing workflows.
- Practical experience in performance tuning and optimization of Databricks read and write operations, including partitioning, file formats, and query optimization techniques.
- Ability to collaborate closely with senior engineers and architects, contribute to design discussions, follow best practices, and support the continuous improvement of the data platform.
- Strong problem-solving skills, eagerness to learn, and the ability to work effectively with cross-functional teams, including data analysts, data scientists, and business stakeholders.
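As a rough illustration of the partitioning point above: a common rule of thumb is to size output partitions toward roughly 128 MB files before a Databricks write. The helper below is a hypothetical sketch (the 128 MB target and the function name are assumptions, not a Databricks API or setting):

```python
# Hypothetical helper: estimate a repartition count for a Databricks write,
# targeting ~128 MB output files (a common rule of thumb, not an official
# Databricks setting). Might be used as df.repartition(n) before writing.

def estimate_partitions(total_bytes, target_file_bytes=128 * 1024 * 1024):
    """Ceiling-divide total data size by the target file size, minimum 1."""
    return max(1, -(-total_bytes // target_file_bytes))

# 10 GiB of data -> 80 partitions of ~128 MiB each
n = estimate_partitions(10 * 1024**3)
```

The right target depends on the workload and file format; the point of the sketch is only that write performance tuning often starts from controlling partition and file sizes.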
This role is ideal for professionals looking to deepen their expertise in Databricks and Data Fabric architectures while contributing to scalable, well-governed, and high-performance enterprise data solutions.
Skills
Data Integration, Data Engineering, SQL
Important Dates & Deadlines
Application Deadline
28 Mar 26, 05:25 PM IST