Interesting Job Opportunity: GCP Data Engineer - Spark/Hive
HuQuo - 1 month ago
- Pune, Maharashtra, India
- Full Time
Job Overview
Functional Area
Data
Work preferred
Work from Office
Experience
Min Experience
5 Years
Max Experience
7 Years
Description
- 5+ years of experience in data engineering, building and maintaining large-scale data pipelines.
- Experience designing and implementing a large-scale data lake on cloud infrastructure.
- Strong technical expertise in Python and SQL
- Extremely well-versed in Google Cloud Platform, including BigQuery, Cloud Storage, Cloud Composer, Dataproc, Dataflow, and Pub/Sub.
- Experience with big data tools such as Hadoop and Apache Spark (PySpark).
- Experience developing DAGs in Apache Airflow 1.10.x or 2.x.
- Good Problem-Solving Skills
- Detail Oriented
- Strong analytical skills working with a large number of databases and tables.
- Ability to work with geographically diverse teams.
- Certification in GCP services.
- Experience with Kubernetes.
- Experience with Docker
- Experience with CircleCI for deployments.
- Experience with Great Expectations.
- Build Data and ETL pipelines in GCP.
- Support migration of data to the cloud using big data technologies such as Spark, Hive, Talend, and Java.
- Interact with customers on a daily basis to ensure smooth engagement.
- Responsible for timely, quality deliveries.
- Fulfill organizational responsibilities: sharing knowledge and experience with other groups in the organization and conducting technical training sessions.
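For candidates unfamiliar with Great Expectations (listed above): the tool automates column-level data-quality assertions over pipeline output. A minimal, stdlib-only sketch of the idea (the helper name below is hypothetical and not the Great Expectations API):

```python
# Hedged sketch: a stand-in for the kind of column check a data-quality
# tool like Great Expectations automates. `expect_column_values_not_null`
# is a hypothetical helper, not a real library function.
def expect_column_values_not_null(rows, column):
    """Return True if every row has a non-None value for `column`."""
    return all(row.get(column) is not None for row in rows)

rows = [
    {"id": 1, "city": "Pune"},
    {"id": 2, "city": None},
]

expect_column_values_not_null(rows, "id")    # passes: no nulls in "id"
expect_column_values_not_null(rows, "city")  # fails: row 2 has a null
```

In a real pipeline, such checks would run as a validation step (e.g. an Airflow task) before data is loaded downstream.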
Skills
Cloud Infrastructure, Designing, ETL, Python, Quality, SQL, Training