Job Description
Role Overview:
You will design, build, and maintain scalable, efficient data pipelines on AWS, with a focus on developing and optimizing ETL/ELT workflows that support large-scale analytics and reporting. The role combines hands-on engineering with close collaboration with data scientists, analysts, and architects, and spans data quality, infrastructure as code, security, monitoring, and performance tuning, as detailed in the responsibilities below.
Key Responsibilities:
- Design, build, and maintain scalable data pipelines on AWS using services such as Glue, Lambda, EMR, S3, Redshift, Kinesis, and Athena.
- Develop and optimize ETL/ELT workflows to support large-scale analytics and reporting requirements.
- Collaborate with data scientists, analysts, and architects to understand data needs and translate them into scalable solutions.
- Implement data quality and validation processes to ensure integrity across pipelines.
- Create and maintain infrastructure as code (IaC) using tools like Terraform or AWS CloudFormation.
- Manage data security and access controls using AWS Identity and Access Management (IAM).
- Monitor and troubleshoot pipeline performance and data issues.
- Provide technical expertise on AWS best practices and performance tuning.
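The data-quality responsibility above can be sketched roughly as follows. This is a minimal illustration only: the record shape and validation rules are hypothetical and not taken from the posting.

```python
# Minimal sketch of a pipeline data-quality check (hypothetical schema/rules).
def validate_records(records, required_fields=("id", "event_time", "amount")):
    """Split records into (valid, rejected) based on simple integrity rules."""
    valid, rejected = [], []
    for rec in records:
        # Rule 1: all required fields must be present and non-null.
        missing = [f for f in required_fields if rec.get(f) is None]
        # Rule 2: if an amount is present, it must be non-negative.
        bad_amount = rec.get("amount") is not None and rec["amount"] < 0
        if missing or bad_amount:
            rejected.append(rec)
        else:
            valid.append(rec)
    return valid, rejected
```

In a real pipeline, rejected rows would typically be written to a quarantine location (for example an S3 prefix) for inspection rather than silently dropped.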
Qualifications Required:
- 7+ years of hands-on experience in data engineering, with at least 3 years focused on AWS data services.
- Strong command of AWS services such as Glue, Redshift, S3, Lambda, Kinesis, EMR, DynamoDB, Athena, and RDS.
- Proficiency in Python, SQL, and PySpark for building and optimizing data workflows.
- Experience with data lake and data warehouse design and implementation.
- Knowledge of streaming data processing and real-time analytics.
- Experience with DevOps and CI/CD pipelines for data workflows.
- Familiarity with data governance, compliance, and security principles on AWS.
- Experience working with tools like Apache Airflow, dbt, or similar orchestration frameworks is a plus.
- Strong problem-solving skills, self-motivation, and the ability to work independently.
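As a rough illustration of the Python/SQL/PySpark proficiency the qualifications call for, the sketch below shows a single-record ETL transform in plain Python; the field names and shapes are hypothetical. In a Glue/PySpark job, the same per-row logic would typically be expressed over a DataFrame instead.

```python
from datetime import datetime, timezone

# Pure-Python sketch of one ETL transform step (hypothetical field names).
def transform_event(raw):
    """Normalize one raw event into a warehouse-friendly shape."""
    # Parse an ISO-8601 timestamp; fromisoformat() does not accept a
    # trailing "Z", so rewrite it as an explicit UTC offset first.
    ts = datetime.fromisoformat(raw["event_time"].replace("Z", "+00:00"))
    return {
        "event_id": str(raw["id"]),
        "event_time": ts.astimezone(timezone.utc).isoformat(),
        "event_date": ts.date().isoformat(),  # common partition key choice
        "amount_cents": round(float(raw["amount"]) * 100),  # avoid float money
    }
```

Storing monetary values as integer cents and deriving a date partition key are common warehouse conventions, though the specifics would depend on the target schema.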
Skills
Python, SQL, PySpark, ETL, Data Processing, Data Engineering, Data Governance, Compliance, Security Principles, Analytics, Real-time Analytics, Streaming Data Processing, Data Lake, Data Warehouse Design, Implementation, AWS Services, Redshift, S3, Glue, Lambda, Kinesis, EMR, Athena, RDS, DynamoDB, CI/CD Pipelines, DevOps, Apache Airflow, dbt
Important Dates & Deadlines
Application Deadline
02 Nov 25, 10:59 AM IST