Job Description
Remark: The final round will be a face-to-face (F2F) interview.
Experience: 10+ years (lead role)
Responsibilities
- Design, develop, and maintain robust and scalable data pipelines using Apache Spark and Scala on the Databricks platform.
- Implement ETL (Extract, Transform, Load) processes for various data sources, ensuring data quality, integrity, and efficiency.
- Optimize Spark applications for performance and cost-efficiency within the Databricks environment.
- Work with Delta Lake for building reliable data lakes and data warehouses, ensuring ACID transactions and data versioning.
- Collaborate with data scientists, analysts, and other engineering teams to understand data requirements and deliver solutions.
- Implement data governance and security best practices within Databricks.
- Troubleshoot and resolve data-related issues, ensuring data availability and reliability.
- Stay updated with the latest advancements in Spark, Scala, Databricks, and related big data technologies.
Requirements
- Proven experience as a Data Engineer with a strong focus on big data technologies.
- Expertise in Scala programming language for data processing and Spark application development.
- In-depth knowledge and hands-on experience with Apache Spark, including Spark SQL, Spark Streaming, and Spark Core.
- Proficiency in using Databricks platform features, including notebooks, jobs, workflows, and Unity Catalog.
- Experience with Delta Lake and its capabilities for building data lakes.
- Strong understanding of data warehousing concepts, data modeling, and relational databases.
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their data services.
- Experience with version control systems like Git.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
- Experience with other big data technologies like Kafka, Flink, or Hadoop ecosystem components.
- Knowledge of data visualization tools.
- Understanding of DevOps principles and CI/CD pipelines for data engineering.
- Relevant certifications in Spark or Databricks.
Skills
Data Visualization, Big Data, Data Governance, Data Modeling, Data Warehousing, Data Warehousing Concepts, ETL, Data Processing, Visualization, Data Engineering, SQL

If an employer asks you to pay any kind of fee, please notify us immediately. Jobaaj does not charge any fee from applicants, and we do not allow other companies to do so either.
Important Dates & Deadlines
Application Deadline
16 May 26, 02:14 PM IST