Job Description
- Develop and maintain data pipelines using Apache Spark for efficient processing of large-scale data.
- Implement data processing tasks using Apache Hadoop and leverage its distributed file system (HDFS) for data storage.
- Utilize Git for version control, ensuring a clean and organized codebase, and collaborate effectively with team members.
- Work with Jupyter Notebooks for exploratory data analysis, data cleaning, and prototyping data processing workflows.
- Use Docker to containerize applications and their dependencies, ensuring consistency across different environments and facilitating deployment.
- Implement log aggregation, monitoring, and data analysis using Splunk, gaining insights from machine-generated data.
- Utilize the ELK Stack (Elasticsearch / Logstash / Kibana) for log analysis, visualization, and real-time data exploration.
- Work with Apache Kafka to build real-time data pipelines and event-driven architectures for streaming data processing.
- Utilize cloud platforms such as AWS, Azure, and Google Cloud to design and implement scalable data engineering solutions.
- Develop data processing applications in Java and Python, leveraging the strengths of each language for specific tasks.
- Collaborate with cross-functional teams, including data scientists and analysts, to understand data requirements and ensure data quality and integrity.
- Optimize and tune data processing workflows for performance and scalability.
- Stay up-to-date with emerging data engineering technologies, tools, and best practices, and apply them to enhance data processing capabilities.
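As a rough illustration of the pipeline responsibilities above: the role itself calls for Apache Spark, but the extract–transform–load pattern can be sketched with only the Python standard library. Everything here (the sample CSV, the field names, the per-user aggregation) is hypothetical, not part of the posting.

```python
import csv
import io

# Hypothetical raw input, standing in for data read from HDFS or cloud storage.
RAW_CSV = """user_id,event,amount
1,purchase,19.99
2,purchase,not_a_number
3,refund,-5.00
1,purchase,5.01
"""

def extract(text):
    """Extract: parse CSV text into a list of dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows with non-numeric amounts, cast types."""
    clean = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # skip malformed records instead of failing the batch
        clean.append({"user_id": int(row["user_id"]),
                      "event": row["event"],
                      "amount": amount})
    return clean

def load(rows):
    """Load: aggregate amount per user (stand-in for a warehouse write)."""
    totals = {}
    for row in rows:
        totals[row["user_id"]] = totals.get(row["user_id"], 0.0) + row["amount"]
    return totals

totals = load(transform(extract(RAW_CSV)))
```

In a real Spark job the same three stages would map to a read, a filter/cast on a DataFrame, and a grouped aggregation; the shape of the logic is what carries over.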
Requirements
- Bachelor's degree in Computer Science, Data Science, or a related field.
- 2+ years of experience in data engineering or a related role.
- Proficiency in Apache Spark and Apache Hadoop for large-scale data processing.
- Strong experience with Git for version control and collaborative development.
- Hands-on experience with Jupyter Notebooks for data analysis and prototyping.
- Familiarity with containerization using Docker.
- Knowledge of the ELK Stack (Elasticsearch / Logstash / Kibana) or Apache Kafka.
- Experience with cloud platforms such as AWS, Azure, and Google Cloud for data engineering tasks.
- Strong programming skills in Java and Python for data processing and application development.
- Familiarity with data modeling, data warehousing, and ETL processes.
- Excellent problem-solving and analytical skills.
- Strong communication and teamwork abilities.
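The Kafka requirement above boils down to the produce/consume pattern for event streams. Kafka itself isn't shown here; this toy sketch uses an in-memory thread-safe queue as a stand-in for a topic, and the event names and counter are hypothetical.

```python
import queue
import threading

# Toy stand-in for a Kafka topic: a thread-safe in-memory queue.
topic = queue.Queue()
SENTINEL = object()  # signals end of stream
totals = {}

def consumer():
    """Consume events and maintain a running count per event type."""
    while True:
        event = topic.get()
        if event is SENTINEL:
            break
        totals[event] = totals.get(event, 0) + 1

worker = threading.Thread(target=consumer)
worker.start()

# Producer side: publish a small stream of events to the "topic".
for event in ["click", "view", "click", "purchase", "click"]:
    topic.put(event)
topic.put(SENTINEL)
worker.join()

print(totals)  # {'click': 3, 'view': 1, 'purchase': 1}
```

With a real broker, the queue becomes a partitioned topic, the sentinel becomes consumer-group offset management, and the consumer runs as a separate process; the decoupling of producer and consumer is the point being illustrated.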
Skills
Data Analysis, Data Modeling, Data Processing, Data Quality, Data Science, Data Warehousing, Designing, ETL, Python, Quality, Splunk, Visualization
About Company
Credence is a dynamic organization dedicated to providing innovative solutions in various industries. With a focus on excellence and integrity, Credence offers a wide range of services, including consulting, technology, and outsourcing. At Credence, we foster a culture of growth and empowerment, where employees are encouraged to reach their full potential. Credence careers provide exciting opportunities for individuals to thrive in a collaborative environment and make a meaningful impact on projects that shape the future. Join Credence and embark on a rewarding career path where your skills and talents are valued, and professional growth is encouraged.
Important Dates & Deadlines
Application Deadline
17 Sep 23, 11:58 PM IST

