Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation, and integrated reporting dashboards, we deliver agile, highly interactive reporting and analytics that help our clients run their business more effectively, understand which business questions can be answered, and learn how to unlock the answers.

Broad Role / Responsibilities
We are seeking a highly skilled and motivated Data Engineer Developer with 6 to 9 years of experience to join our dynamic team. The ideal candidate must have strong hands-on expertise in technologies such as Spark, Scala, Hadoop, and SQL, along with demonstrated exposure to Azure cloud services. The Data Engineer Developer will play a crucial role in designing, implementing, and maintaining robust data pipelines, ensuring the efficient flow and processing of large datasets.
· Data Pipeline Development: Design, develop, and maintain scalable and efficient data pipelines using Spark and Scala. Implement ETL processes for ingesting, transforming, and loading data from various sources (see the pipeline sketch after the "Nice to Have" list).
· Big Data Technologies: Work with Hadoop ecosystem components such as HDFS, Hive, and HBase for efficient storage and retrieval of large-scale datasets. Optimize and tune Spark jobs to ensure optimal performance and resource utilization.
· SQL Expertise: Utilize strong SQL skills to query, analyse, and manipulate data stored in relational databases and data warehouses.
· Security: Implement security and data protection measures at all levels, including databases and API services. Apply data masking and row-level and column-level security (see the masking sketch below). Keep abreast of the latest security issues and incorporate the necessary patches and updates.
· Testing and Debugging: Write and maintain test code to validate functionality (see the test sketch below). Debug applications and troubleshoot issues as they arise.
· Collaboration and Communication: Collaborate with cross-functional teams, including database engineers, data integration engineers, reporting teams, and product development. Communicate complex data findings in a clear and actionable manner to non-technical stakeholders.
· Continual Learning: Keep up to date with emerging tools, techniques, and technologies in the data space. Engage in self-improvement and continuous learning opportunities to maintain expertise in the data science domain.
· Maintain an end-to-end understanding of the project and infrastructure involving multiple technologies (Big Data Analytics).
· Proactively identify problem areas and concerns related to data in the project, explore ways to tackle the issues, and come up with optimal solutions.
· Create FRS/SRS/design documents and other technical documents.
· Prepare lessons-learned documentation for projects and engagements. Develop best practices and tools for project execution and management.

Nice to Have:
· Exposure to Azure Cloud.
· Experience working in the travel and logistics domain is preferred.
· Familiarity with data streaming technologies (e.g., Apache Kafka); see the streaming sketch below.
· Exposure to containerization and orchestration tools (e.g., Docker, Kubernetes).
· Knowledge of machine learning concepts and frameworks.
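The pipeline sketch referenced under Data Pipeline Development above is a minimal, illustrative Spark/Scala ETL job, not part of the role description: the storage paths, schema, and column names (booking_id, booking_ts, amount) are hypothetical placeholders.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

// Minimal ETL sketch: ingest raw CSV bookings, clean and enrich them,
// and load the result as partitioned Parquet. Paths and column names
// are illustrative only.
object BookingsEtl {

  def transform(raw: DataFrame): DataFrame =
    raw.filter(col("booking_id").isNotNull)                  // drop malformed rows
      .withColumn("booking_date", to_date(col("booking_ts"))) // derive a partition column
      .withColumn("amount", col("amount").cast("decimal(12,2)"))
      .dropDuplicates("booking_id")

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("bookings-etl")
      .getOrCreate()

    val raw = spark.read
      .option("header", "true")
      .csv("abfss://landing@account.dfs.core.windows.net/bookings/")   // hypothetical ADLS path

    transform(raw).write
      .mode("overwrite")
      .partitionBy("booking_date")
      .parquet("abfss://curated@account.dfs.core.windows.net/bookings/")

    spark.stop()
  }
}
```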
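The masking sketch below shows one way the Security responsibility's data masking and row-/column-level security could be applied at the Spark layer. The column names, masking rule, and entitlement model are assumptions for illustration; in practice these controls are often enforced by platform or database policies (for example in Databricks or the warehouse itself) rather than hand-rolled in job code.

```scala
import org.apache.spark.sql.{Column, DataFrame}
import org.apache.spark.sql.functions._

// Illustrative column masking and row-level filtering applied in Spark.
// Column names and the caller's entitlements are hypothetical.
object AccessControls {

  // Column-level masking: keep only the last 4 digits of a card number.
  def maskCardNumber(c: Column): Column =
    concat(lit("****-****-****-"), substring(c, -4, 4))

  // Row-level security: a caller only sees rows for regions they are entitled to.
  def applyPolicies(df: DataFrame, allowedRegions: Seq[String]): DataFrame =
    df.filter(col("region").isin(allowedRegions: _*))
      .withColumn("card_number", maskCardNumber(col("card_number")))
      .drop("customer_ssn")   // column-level restriction: remove fields the caller may not see
}
```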
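The test sketch below illustrates the Testing and Debugging responsibility for the hypothetical BookingsEtl.transform from the pipeline sketch above, assuming ScalaTest and a local SparkSession; the test data and expectations are illustrative only.

```scala
import org.apache.spark.sql.SparkSession
import org.scalatest.funsuite.AnyFunSuite

// Small ScalaTest suite validating the transform logic on an in-memory
// DataFrame. Relies on the hypothetical BookingsEtl object sketched above.
class BookingsEtlSpec extends AnyFunSuite {

  private val spark = SparkSession.builder()
    .master("local[2]")
    .appName("bookings-etl-test")
    .getOrCreate()

  import spark.implicits._

  test("transform drops rows without a booking_id and deduplicates") {
    val raw = Seq(
      ("B1", "2024-01-01 10:00:00", "100.50"),
      ("B1", "2024-01-01 10:00:00", "100.50"),   // duplicate booking
      (null, "2024-01-02 11:00:00", "42.00")     // malformed row, no booking_id
    ).toDF("booking_id", "booking_ts", "amount")

    val result = BookingsEtl.transform(raw)

    assert(result.count() == 1)
    assert(result.columns.contains("booking_date"))
  }
}
```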
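For the Kafka item under Nice to Have, the streaming sketch below consumes a topic with Spark Structured Streaming and writes the events to storage. It assumes the spark-sql-kafka connector is on the classpath; the broker address, topic name, and paths are placeholders.

```scala
import org.apache.spark.sql.SparkSession

// Minimal Structured Streaming sketch: read a Kafka topic and persist the
// events as Parquet. Broker, topic, and paths are illustrative only.
object BookingEventsStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("booking-events-stream").getOrCreate()

    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "booking-events")
      .load()
      .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value", "timestamp")

    val query = events.writeStream
      .format("parquet")
      .option("path", "abfss://curated@account.dfs.core.windows.net/booking_events/")
      .option("checkpointLocation", "abfss://checkpoints@account.dfs.core.windows.net/booking_events/")
      .start()

    query.awaitTermination()
  }
}
```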
Broad Experience & Expertise Requirements
6 to 9 years of hands-on experience in handling large data volumes and in data engineering using Big Data, Hadoop (HDFS, Hive, HBase), Scala, Spark (Spark Core, Spark SQL, Spark Streaming), Python, PySpark, SQL, ETL, Databricks, data modelling, Azure Cloud, data pipelines, CI/CD, Docker, containers, Git, etc. Knowledge of and experience in handling structured, semi-structured, and unstructured data sets.

Specific Past Work Experience Requirements
· 6+ years of relevant experience in the above technologies.
· 3 to 5 years of consulting experience in the technology domain, handling data projects.

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required:
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Apache Spark
Optional Skills:
Desired Languages (if blank, desired languages not specified)
Travel Requirements:
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date: