
Job Overview

Work Preferred: Work from Office
Min Experience: 2 Years
Max Experience: 4 Years


Algoscale is an emerging leader in the field of Big Data Analytics and AI. We have successfully delivered 260+ projects for clients in 25+ countries, earning positive feedback along the way and building a large community following.


We are looking for a savvy Data Engineer to join our growing team of analytics experts. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure that an optimal data delivery architecture is applied consistently across ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing, or even re-designing, our data architecture to support the next generation of products and data initiatives.

  • Gather and understand data requirements, build complex data pipelines, and achieve high-quality data ingestion goals.
  • Drive the development of cloud-based and hybrid data warehouses, data ingestion, and data profiling activities.
  • Analyze, re-architect, and re-platform on-premise data lakes into data platforms on the AWS cloud using AWS and third-party services.
  • Implement and maintain a standard data/technology deployment workflow to ensure that all deliverables and enhancements are shipped in a disciplined and robust manner.
  • Work independently as well as in a team environment, with strong self-motivation.
Required Skills
  • Strong proficiency in Scala or Python
  • Experience building and optimizing ETL flows, data pipelines, architectures, and data sets
  • Experience with Spark (if you have experience with other big data tools such as Hadoop, HBase, MapReduce, Cassandra, DynamoDB, Kafka, or AWS services, we can cross-train)
  • Hands-on experience with data integration and ops tools such as AWS Glue, Apache NiFi, and Airflow
  • Experience with a data warehouse or data lake (Redshift, BigQuery, Snowflake) is preferred
  • Experience with SQL and NoSQL databases is good to have
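To give a flavor of the ETL work described above, here is a minimal extract-transform-load sketch in plain Python (no Spark dependency; the field names and the in-memory "warehouse" sink are purely illustrative, not part of this role's actual stack):

```python
# Minimal ETL sketch: extract raw rows into records, filter and
# normalize them, and load the cleaned records into a target sink.
# All field names here are illustrative examples.

def extract(raw_rows):
    """Extract: turn (header, values) rows into dict records."""
    header = raw_rows[0]
    for values in raw_rows[1:]:
        yield dict(zip(header, values))

def transform(records):
    """Transform: drop rows missing an id, cast amount to float."""
    for rec in records:
        if not rec.get("id"):
            continue  # data-quality gate: skip invalid rows
        rec["amount"] = float(rec["amount"])
        yield rec

def load(records, sink):
    """Load: append cleaned records to the sink (a stand-in for a warehouse)."""
    for rec in records:
        sink.append(rec)

raw = [
    ["id", "amount"],
    ["a1", "10.5"],
    ["", "3.0"],   # invalid: missing id, will be filtered out
    ["a2", "7"],
]
warehouse = []
load(transform(extract(raw)), warehouse)
print(len(warehouse))  # 2 cleaned rows reach the sink
```

In a production pipeline the same three stages would typically be expressed as Spark jobs or Airflow tasks, with the sink being a warehouse such as Redshift or Snowflake.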
Educational Qualification Preferred
  • B.E./B.Tech. or a degree in a related field
Experience Preferred
  • 2+ years of work experience
  • AWS certification (preferred)
Skills: Python, Apache Spark, Apache Hadoop, Apache Cassandra, Amazon DynamoDB, Amazon Web Services (AWS), Apache Kafka, and SQL
