Data Engineer (AWS & Databricks & PySpark)
Job Description
As a Data Engineer (AWS & Databricks & PySpark) at Gainwell, you can contribute your skills as we harness the power of technology to help our clients improve the health and well-being of the members they serve: a community's most vulnerable. Connect your passion with purpose, teaming with people who thrive on finding innovative solutions to some of healthcare's biggest challenges. This role is strictly involved in the development of the product and does not involve access to Protected Health Information (PHI), Personally Identifiable Information (PII), or any secured/confidential client data. The work is limited to application development and does not include handling or processing of sensitive health information.
Your role in our mission
- Lead the design and development of highly scalable, robust data pipelines and ETL/ELT solutions using Databricks and Apache Spark.
- Provide expertise in Databricks workspace, cluster, and job management, focusing on performance tuning, cost optimisation (DBU reduction), autoscaling, and scheduling.
- Define and enforce technical standards, design patterns, and best practices for data modelling, quality, and schema management across the platform.
- Implement and manage security protocols, secret management, encryption, and regulatory compliance within the data environment (DevSecOps).
- Drive the adoption of DataOps principles by integrating Databricks workloads with enterprise CI/CD tools (e.g., Azure DevOps, GitHub Actions) for automated code and model lifecycle management.
- Act as the technical liaison, engaging with stakeholders to gather requirements, communicate complex designs, and ensure data solutions meet business needs for quality and consistency.
- Provide technical guidance and mentorship to junior team members, lead project planning and estimation, and create clear architecture and process documentation.
- Focus on maintaining, debugging, and optimising existing production data systems to ensure reliability and low technical debt.
- Ensure data extraction, transformation, loading, normalisation, cleansing, and updating to maintain database integrity and security.
- Collaborate on the development and documentation of data models, schemas, and source-to-target mappings within the data warehouse.
- Conceptualise and visualise scalable data frameworks.
- Communicate effectively with internal and external stakeholders to support project objectives.
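In practice, the extraction, cleansing, normalisation, and loading responsibilities above follow a familiar ETL shape. A minimal sketch in plain Python is shown below; the column names and sample data are invented for illustration, and a real pipeline in this role would use PySpark DataFrames and Delta Lake tables on Databricks rather than in-memory dicts.

```python
import csv
import io

# Hypothetical raw feed; columns and values are invented for illustration.
RAW_CSV = """member_id,plan,monthly_cost
101, gold ,120.50
102,SILVER,
101, gold ,120.50
103,Bronze,80
"""

def extract(text):
    """Extract: parse the raw CSV feed into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: normalise casing/whitespace, cleanse rows with a
    missing cost, and drop duplicates on the business key."""
    seen, clean = set(), []
    for row in rows:
        if not row["monthly_cost"].strip():
            continue  # cleanse: skip rows with no cost recorded
        key = row["member_id"]
        if key in seen:
            continue  # dedupe on member_id
        seen.add(key)
        clean.append({
            "member_id": int(key),
            "plan": row["plan"].strip().lower(),  # normalise casing/spacing
            "monthly_cost": float(row["monthly_cost"]),
        })
    return clean

def load(rows):
    """Load: index by member_id, standing in for a warehouse table
    keyed on its primary key."""
    return {r["member_id"]: r for r in rows}

table = load(transform(extract(RAW_CSV)))
```

The same extract/transform/load boundaries carry over directly when each stage becomes a Spark job: the transform step is where schema enforcement and data-quality rules live, which keeps the load target consistent.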
What we're looking for
- Demonstrated expertise in the Databricks platform (AWS/Azure), including deep knowledge of Spark fundamentals, Delta Lake architecture, and Unity Catalog.
- Strong knowledge of cloud platforms (AWS preferred) and advanced proficiency in SQL, data modelling, data warehousing concepts, and SQL optimisation.
- Proven experience with CI/CD tools and hands-on implementation of DataOps and MLOps principles.
- Familiarity and experience in implementing data governance and data quality practices.
- Demonstrated ability in project planning, low-level design (LLD), accurate estimation, and managing project delivery at a team level.
- Excellent verbal and written communication skills with proven experience in effective stakeholder management.
- Knowledge of Agile Scrum and SAFe methodologies, with a strong sense of ownership and accountability.
- Proven track record of mentoring and guiding team members to foster technical growth.
- Strong debugging skills and the ability to ask critical questions to drive clarity in complex technical situations.
- 10+ years of overall experience, with a minimum of 5 years' experience with big data technologies on AWS, Azure, or GCP.
- Bachelor's degree in computer science or a related field, or equivalent education.
- At least 3 years' hands-on experience with Databricks or Apache Spark (Python/Scala).
- Proficient in working with various database structures, including transactional and data warehouse environments.
- Databricks and AWS developer/architect certifications are highly desirable.
- Demonstrated ability to lead and mentor engineering teams.
- Excellent communication abilities.
- Capacity to deliver impactful results across multiple concurrent projects.
- Strategic thinker with the ability to balance immediate and long-term objectives.
- Willingness to travel as required.
Work Environment - Remote
Skills
Big Data, Python, Data Governance, Data Warehousing, Data Warehousing Concepts, ETL, Data Extraction, Implementation, Scrum, Data Engineer, SQL
About Company
Inowell Technologies is a leading innovator in the tech industry, dedicated to developing cutting-edge solutions that enhance everyday life and streamline business operations. Careers at Inowell Technologies are dynamic and fulfilling, offering professionals the chance to work on transformative projects in areas such as artificial intelligence, software development, and sustainable technology. Inowell Technologies careers are suited for those passionate about pushing the boundaries of technology and making a significant impact in a rapidly evolving field. The company fosters a culture of creativity, collaboration, and continuous learning, making it an ideal workplace for tech enthusiasts looking to advance their careers.
Important dates & deadlines
Application Deadline
01 Apr 26, 03:42 PM IST