Job Description
We are looking for an experienced Senior Data Engineer to design and implement scalable data architectures and AI-ready data products. The ideal candidate has deep expertise in the Databricks Lakehouse Platform, strong skills in AWS cloud services, and exposure to SAP data processing.
Key Responsibilities
- Architect and build scalable data pipelines, models, and products (Databricks, AWS).
- Manage end-to-end data lifecycle and develop enterprise data models.
- Integrate data from SAP ECC, S/4HANA, and non-SAP systems.
- Develop batch, real-time, and streaming data solutions.
- Implement data governance, security, quality, and observability.
- Optimize performance and cost across platforms.
- Collaborate with cross-functional teams to deliver enterprise-ready data solutions.
Requirements
- 10+ years in data engineering/data architecture.
- Strong expertise in Databricks (Delta Lake, Medallion Architecture, DLT, PySpark, SQL Warehouse, Unity Catalog).
- Proficiency in Python, SQL, PySpark.
- AWS experience: S3, Lambda, EMR, Redshift, Bedrock.
- Data modeling (ER & Dimensional) and metadata management.
- CI/CD and DevOps awareness.
- SAP S/4HANA data extraction (DataSphere, SLT, BDC).
- ABAP, CDS views, and manufacturing domain experience.
- Databricks Data Engineer/Architect certification (preferred).
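To make the Medallion Architecture requirement above concrete, here is a minimal, stdlib-only sketch of the bronze → silver → gold layering pattern. This is an illustration under stated assumptions, not Databricks code: in a real Lakehouse pipeline each layer would be a Delta table transformed with PySpark or DLT, whereas here plain Python lists stand in for the tables, and the record fields (`id`, `amount`) are hypothetical.

```python
import json

def bronze_ingest(raw_lines):
    """Bronze: land raw records as-is, parsing but not cleaning."""
    return [json.loads(line) for line in raw_lines]

def silver_clean(bronze):
    """Silver: deduplicate on 'id', drop records missing required fields,
    and cast 'amount' to a numeric type."""
    seen, silver = set(), []
    for rec in bronze:
        if rec.get("id") is None or rec.get("amount") is None:
            continue  # conformance rule: required fields must be present
        if rec["id"] in seen:
            continue  # deduplicate on business key
        seen.add(rec["id"])
        silver.append({"id": rec["id"], "amount": float(rec["amount"])})
    return silver

def gold_aggregate(silver):
    """Gold: business-level aggregate consumed by reporting."""
    return {"total_amount": sum(r["amount"] for r in silver)}

raw = [
    '{"id": 1, "amount": "10.5"}',
    '{"id": 1, "amount": "10.5"}',  # duplicate, dropped in silver
    '{"id": 2, "amount": null}',    # missing amount, dropped in silver
    '{"id": 3, "amount": "4.5"}',
]
gold = gold_aggregate(silver_clean(bronze_ingest(raw)))
print(gold)  # {'total_amount': 15.0}
```

The same shape carries over to Databricks: each function becomes a table-to-table transformation, with data quality expectations enforced between layers.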
Skills: Data Architecture, SAP S/4HANA Data Extraction, SQL Warehouse, PySpark Workbooks, Databricks, Python, Medallion Architecture, DLT Pipelines, Data Engineering, Delta Lake, DataSphere, SLT, BDC, Databricks Lakehouse Platform
Skills
Python, Data Architecture, Data Governance, Data Modeling, Data Extraction, Data Processing, Data Engineer, AI, SQL
Important Dates & Deadlines
Application Deadline
04 Apr 26, 04:03 PM IST