Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients run their business more effectively and understand which business questions can be answered and how to unlock the answers.

Azure Data Engineer + Databricks Developer:
Work Location: PAN India
Necessary skills / tools:
SQL & Python / PySpark
Azure Services: ADF, Databricks, Synapse, ADLS, App Services
Databricks: Lakehouse concept, Unity Catalog
Data warehousing
Data modelling
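
As a rough illustration of how these tools fit together (not part of the role description itself), the sketch below uses PySpark and SQL on Azure Databricks to read raw files from ADLS Gen2 and persist the result as a Delta table in a Lakehouse; the storage account, container, column names, and table name are hypothetical placeholders.

# Minimal PySpark sketch on Azure Databricks: read raw CSV files from ADLS Gen2,
# apply a SQL transformation, and persist the result as a Delta table.
# Storage account, container, columns, and table names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

raw_path = "abfss://raw@examplestorageaccount.dfs.core.windows.net/sales/"

# Ingest raw files into a DataFrame
sales_df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(raw_path)
)
sales_df.createOrReplaceTempView("sales_raw")

# SQL transformation: aggregate revenue per customer
revenue_df = spark.sql("""
    SELECT customer_id,
           SUM(amount) AS total_revenue
    FROM sales_raw
    GROUP BY customer_id
""")

# Persist as a Delta table (three-part name assumes a Unity Catalog setup)
revenue_df.write.format("delta").mode("overwrite").saveAsTable("analytics.sales.customer_revenue")
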
Responsibilities:
Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services.
Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access & ingestion, data processing, data integration, data modelling, database design & implementation, data visualisation, and advanced analytics.
Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications.
Develop best practices, including reusable code, libraries, patterns, and consumable frameworks, for cloud-based data warehousing and ETL.
Maintain best-practice standards for the development of cloud-based data warehouse solutions, including naming standards.
Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks (see the sketch after this list).
Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained.
Work with other members of the project team to support delivery of additional project components (e.g. API interfaces).
Evaluate the performance and applicability of multiple tools against customer requirements.
Work within an Agile delivery / DevOps methodology to deliver proof-of-concept and production implementations in iterative sprints.
Integrate Databricks with other technologies (ingestion tools, visualisation tools).
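
As a hedged illustration of the pipeline responsibilities above, the following sketch shows one step that moves data from a source table to a curated Delta table, with simple quality checks before the write; the table names, columns, thresholds, and bronze/silver layering convention are assumptions for the example, not a prescribed implementation.

# Illustrative pipeline step: move data from a source table into a target Delta table,
# with simple quality checks (empty input, null keys, duplicate keys) before the write.
# Table names, columns, and layer names are assumptions for this sketch.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

source_df = spark.read.table("bronze.crm.customers_raw")  # assumed to be loaded by an upstream ingestion job

# Basic consistency checks: primary key must be populated and unique
total_rows = source_df.count()
null_keys = source_df.filter(F.col("customer_id").isNull()).count()
distinct_keys = source_df.select("customer_id").distinct().count()

if total_rows == 0 or null_keys > 0 or distinct_keys != total_rows:
    raise ValueError(
        f"Quality check failed: rows={total_rows}, null_keys={null_keys}, distinct_keys={distinct_keys}"
    )

# Light standardisation before landing in the curated (silver) layer
curated_df = (
    source_df
    .withColumn("email", F.lower(F.col("email")))
    .withColumn("load_date", F.current_date())
)

curated_df.write.format("delta").mode("overwrite").saveAsTable("silver.crm.customers")
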
Requirements:
Proven experience working as a data engineer.
Highly proficient in the Spark framework (Python and/or Scala).
Extensive knowledge of data warehousing concepts, strategies, and methodologies.
Direct experience of building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks).
Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics.
Experience in designing and hands-on development of cloud-based analytics solutions.
Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required.
Experience designing and building data pipelines using API ingestion and streaming ingestion methods (a brief streaming sketch appears near the end of this posting).
Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential.
Thorough understanding of Azure Cloud Infrastructure offerings.
Strong experience in common data warehouse modelling principles, including Kimball.
Working knowledge of Python is desirable.
Experience developing security models.
Databricks & Azure Big Data Architecture Certification would be a plus.
Must be team oriented, with strong collaboration, prioritization, and adaptability skills.
Mandatory skill sets: SQL & Python / PySpark, AWS Services, Glue, AppFlow, Redshift, Data warehousing, Data modelling
Preferred skill sets: SQL & Python / PySpark, AWS Services, Glue, AppFlow, Redshift, Data warehousing, Data modelling
Years of experience required: 3-10
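
To illustrate the API/streaming ingestion requirement mentioned above, here is a minimal sketch using Spark Structured Streaming with the Databricks Auto Loader ("cloudFiles") source; the storage paths, checkpoint location, and target table name are placeholders and not part of the role description.

# Illustrative streaming ingestion with Spark Structured Streaming on Databricks,
# using Auto Loader ("cloudFiles") to pick up newly arriving JSON files.
# Paths and the target table name are placeholders for this sketch.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

events_stream = (
    spark.readStream
    .format("cloudFiles")                       # Databricks Auto Loader source
    .option("cloudFiles.format", "json")        # incoming files are JSON
    .option("cloudFiles.schemaLocation",
            "abfss://meta@examplestorageaccount.dfs.core.windows.net/schemas/events/")
    .load("abfss://landing@examplestorageaccount.dfs.core.windows.net/events/")
)

(
    events_stream.writeStream
    .format("delta")
    .option("checkpointLocation",
            "abfss://meta@examplestorageaccount.dfs.core.windows.net/checkpoints/events/")
    .trigger(availableNow=True)                 # process available files, then stop (recent runtimes)
    .toTable("bronze.events.raw_events")
)
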
Qualifications: BE / B.Tech / MCA / M.Tech
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required:
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Python (Programming Language), Structured Query Language (SQL)
Optional Skills:
Desired Languages (if blank, desired languages not specified)
Travel Requirements:
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date: