Senior Data Engineer

Department: Data Science, Analytics & Machine Learning
149+ Applicants
Posted: 1 week ago
3-6 years
Gurgaon / Gurugram, Haryana
Work from office


Job Description

Role: Senior Data Engineer
Location: Gurgaon, Sector 32
Budget: 27 to 32 LPA
Work experience: 3 to 6 years
Skills required:
  • Python (very strong skills)
  • Kafka
  • AWS
  • Spark-based data processing
  • Change Data Capture (CDC), e.g. with Debezium
  • Apache Airflow
  • Prefect
  • Strong SQL skills
  • Data Structures and Algorithms (DSA)
  • Data lake
  • Docker, Kubernetes
  • High-level design (HLD) and low-level design (LLD)
  • Streaming data pipelines
Job brief
We are looking for a highly skilled Data Engineer with a strong foundation in programming, data structures, and distributed data systems. The ideal candidate has hands-on experience with Python or Go, deep experience building batch and streaming pipelines with Kafka and Spark, and is comfortable working in a cloud-native (AWS) environment. This role involves building and optimizing scalable data pipelines that power analytics, reporting, and downstream applications. You will work closely with data scientists, BI teams, and platform engineers to deliver reliable, high-performance data systems aligned with business goals.
Responsibilities
  • Design, build and maintain scalable batch and streaming data pipelines.
  • Develop real-time data ingestion and processing systems using Kafka.
  • Build and optimize Spark-based data processing jobs (batch and streaming).
  • Write high-quality, production-grade code using Python or Go.
  • Apply strong knowledge of data structures, algorithms and system design to solve complex data problems.
  • Orchestrate workflows using Apache Airflow and other open-source tools.
  • Ensure data quality, reliability and observability across pipelines.
  • Work extensively on AWS (S3, EC2, IAM, EMR / Glue / EKS or similar services).
  • Collaborate with analytics and BI teams to support tools like Apache Superset.
  • Continuously optimize pipeline performance, scalability and cost.
Requirements and Skills
  • Strong proficiency in Python or Go (production-level coding required).
  • Excellent understanding of Data Structures and Algorithms.
  • Hands-on experience with Apache Kafka for real-time streaming pipelines.
  • Strong experience with Apache Spark (batch and structured streaming).
  • Solid understanding of distributed systems and data processing architectures.
  • Proficiency in SQL and working with large-scale datasets.
  • Hands-on experience with Apache Airflow for pipeline orchestration.
  • Experience working with open-source analytics tools such as Apache Superset.
  • 3 to 6 years of relevant experience required.
  • Good to have: experience with data lake architectures.
  • Understanding of data observability, monitoring and alerting.
  • Exposure to ML data pipelines or feature engineering workflows.
  • Education: B.Tech/BE in Computer Science, Information Technology, or a related engineering discipline.

Skills

Python, Data Processing, Data Engineering, Analytics, ML, SQL

If an employer asks you to pay any kind of fee, please notify us immediately. Jobaaj does not charge applicants any fee, nor do we allow other companies to do so.

Important Dates & Deadlines

Application Deadline

28 Mar 26, 05:25 PM IST
