Job Description
Location: Hyderabad
Experience Required: 4-8 years
Job Type: Full-Time
No. of Positions: 01
We are looking for a Senior AWS Data Engineer to lead the design, development, and maintenance of scalable cloud-based data lakes, data warehouses, and analytics platforms. The ideal candidate will bring strong hands-on experience in AWS data services, ETL engineering, and data modeling, while collaborating closely with architects, analysts, DevOps teams, and business stakeholders.
This role requires strong ownership, a problem-solving mindset, and the ability to work in complex, cross-functional environments, preferably with exposure to Healthcare, Life Sciences, Genomics, and Clinical/Pre-clinical data domains.
Key Responsibilities
- Design, build, and optimize end-to-end data pipelines and ETL/ELT workflows on AWS.
- Lead development and maintenance of data lakes and enterprise data warehouses, supporting analytical and operational workloads.
- Implement scalable ingestion and integration frameworks for structured and semi-structured data from multiple sources.
- Develop and manage data processing pipelines using AWS Glue (PySpark), SQL, and Python frameworks.
- Own and improve data modeling and data warehouse design (dimensional models, star/snowflake schema).
- Ensure robust data quality checks, validation frameworks, monitoring, alerting, and logging for production reliability.
- Collaborate with cloud/platform/DevOps teams to implement CI/CD automation and Infrastructure-as-Code best practices where applicable.
- Implement and enforce data governance, security, and compliance across the AWS data ecosystem.
- Perform performance tuning and cost optimization across services like Redshift, Athena, Glue, and S3.
- Mentor junior engineers, conduct code reviews, enforce best practices, and contribute to technical standards.
AWS Data Services
- Strong hands-on experience with Amazon S3, AWS Glue, Lake Formation, Redshift, Athena, and Lambda.
- Experience with Redshift Spectrum and partitioned querying best practices.
- Strong ETL development experience using AWS Glue (PySpark), Python, and SQL.
- Experience working with:
  - CSV, JSON, Parquet, and API-based data ingestion
  - Batch and near-real-time ingestion patterns
- Strong expertise in:
  - Dimensional modeling
  - Star and snowflake schema design
  - Designing scalable and maintainable datasets for BI/analytics use
- Proven experience implementing:
  - Data quality frameworks and automated validation checks
  - Metadata management, lineage, and governance concepts
  - Access controls and policy-driven governance (Lake Formation preferred)
- Hands-on experience with:
  - GitHub
  - CodeBuild / CodePipeline
  - CI/CD practices for data workflows
- Strong understanding of:
  - IAM, KMS, encryption, and secure access patterns
  - AWS cloud security best practices
- Bachelor's/Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 4-8 years of experience in Data Engineering / Cloud Data Platforms.
- Exposure to Healthcare / Life Sciences / Pharma datasets and environments.
- Familiarity with compliance frameworks such as GxP and HIPAA (good to have).
- Experience with visualization and analytics tools such as QuickSight (optional).
- AWS Certifications (good to have):
  - AWS Certified Data Analytics - Specialty
  - AWS Certified Developer - Associate
  - AWS Certified Solutions Architect - Associate
Skills
Data Analytics, Python, Data Governance, Data Modeling, ETL, Data Processing, Snowflake, Visualization, Data Engineer, Analytics, SQL
Important Dates & Deadlines
Application Deadline
28 Mar 26, 05:25 PM IST