Posted on: August 15, 2024

ROLES & RESPONSIBILITIES

Core Skills
· Build pipelines to ingest a wide variety of data from multiple sources within the organization as well as from social media and public data sources.
· Collaborate with cross-functional teams to source data and make it available for downstream consumption.
· Work with the team to provide an effective solution design that meets business needs.
· Maintain regular communication with key stakeholders; understand any key concerns about how the initiative is being delivered, and any risks or issues that have not yet been identified or are not being progressed.
· Ensure dependencies and challenges (risks) are escalated and managed; escalate critical issues to the Sponsor and/or Head of Data Engineering.
· Ensure timelines (milestones, decisions and delivery) are managed and the value of the initiative is achieved, without compromising quality and within budget.
· Ensure an appropriate and coordinated communications plan, both internal and external, is in place for initiative execution and delivery.
· Ensure final handover of the initiative to business-as-usual processes, carry out a post-implementation review (as necessary) to confirm the initiative's objectives have been delivered, and feed any lessons learned into future initiative management processes.

Who we are looking for:

Competencies & Personal Traits
· Works as a team player
· Excellent problem-analysis skills
· Experience with at least one cloud infrastructure provider (Azure/AWS)
· Experience building batch data pipelines with Apache Spark (Spark SQL, DataFrame API) or Hive Query Language (HQL)
· Experience building streaming data pipelines with Apache Spark Structured Streaming or Apache Flink on Kafka and Delta Lake
· Knowledge of NoSQL databases; experience with Cosmos DB and GraphQL is a plus
· Knowledge of big data ETL processing tools
· Experience with Hive and Hadoop file formats (Avro / Parquet / ORC)
· Basic knowledge of scripting (shell / bash)
· Experience working with multiple data sources, including relational databases (SQL Server / Oracle / DB2 / Netezza), NoSQL / document databases, and flat files
· Basic understanding of CI/CD tools such as Jenkins, JIRA, Bitbucket, Artifactory, Bamboo and Azure DevOps
· Basic understanding of DevOps practices using Git version control
· Ability to debug, fine-tune and optimize large-scale data processing jobs

Working Experience
· 1-3 years of broad experience working with enterprise IT applications in cloud platform and big data environments

Professional Qualifications
· Certifications related to Data and Analytics would be an added advantage

Education
· Master's/bachelor's degree in STEM (Science, Technology, Engineering, Mathematics)

Language
· Fluency in written and spoken English
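For candidates unsure what "building data pipelines" involves day to day, the extract-transform-load pattern behind these responsibilities can be sketched in plain Python (not Spark; all source names and fields here are hypothetical):

```python
# Minimal illustrative batch ETL: extract records from two hypothetical
# sources (a CSV export and a JSON feed), normalise them into a common
# schema, and return the merged set ready for loading downstream.
import csv
import io
import json

def extract_csv(text):
    """Parse CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def extract_json(text):
    """Parse a JSON array of record dicts."""
    return json.loads(text)

def transform(records):
    """Normalise field names and types across both sources."""
    return [
        {"user_id": int(r["id"]), "name": r["name"].strip().title()}
        for r in records
    ]

def run_pipeline(csv_text, json_text):
    """Extract from both sources, merge, and transform."""
    return transform(extract_csv(csv_text) + extract_json(json_text))

# Hypothetical inputs standing in for real upstream systems.
crm_csv = "id,name\n1, alice \n2,bob\n"
events_json = '[{"id": "3", "name": "carol"}]'
print(run_pipeline(crm_csv, events_json))
# → [{'user_id': 1, 'name': 'Alice'}, {'user_id': 2, 'name': 'Bob'},
#    {'user_id': 3, 'name': 'Carol'}]
```

In a Spark job the same extract/transform/load shape appears as DataFrame reads, transformations and writes, typically distributed across a cluster rather than run in-process.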