Associate

Kolkata, West Bengal, India

Job Description


Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients run their business more effectively and understand which business questions can be answered and how to unlock the answers.

Azure Data Engineer + Databricks Developer
Work Location: PAN India
Necessary skills / tools:
• SQL & Python / PySpark
• Azure Services: ADF, Databricks, Synapse, ADLS, App Services
• Databricks: Lakehouse concept, Unity Catalog
• Data warehousing
• Data modelling
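
To illustrate the kind of work these tools are used for, here is a minimal PySpark sketch of a Databricks Lakehouse ingestion step; the storage account, container, and catalog/schema/table names are hypothetical and not taken from this posting. It reads raw CSV files from ADLS Gen2 and registers the result as a Delta table in Unity Catalog.

# Minimal Databricks notebook sketch (hypothetical names throughout).
# Reads raw CSV files from ADLS Gen2 and writes a Delta table registered in Unity Catalog.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # on Databricks this returns the existing session

# Hypothetical ADLS Gen2 path: abfss://<container>@<storage account>.dfs.core.windows.net/<folder>
raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/sales/orders/"

orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(raw_path)
    .withColumn("ingested_at", F.current_timestamp())  # simple audit column
)

# Hypothetical Unity Catalog target: <catalog>.<schema>.<table>
orders.write.format("delta").mode("overwrite").saveAsTable("analytics.bronze.orders")
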
Roles and responsibilities:
• Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services.
• Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access & ingestion, data processing, data integration, data modelling, database design & implementation, data visualization, and advanced analytics.
• Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications.
• Develop best practices, including reusable code, libraries, patterns, and consumable frameworks for cloud-based data warehousing and ETL.
• Maintain best-practice standards for the development of cloud-based data warehouse solutions, including naming standards.
• Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks.
• Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained.
• Work with other members of the project team to support delivery of additional project components (API interfaces).
• Evaluate the performance and applicability of multiple tools against customer requirements.
• Work within an Agile delivery / DevOps methodology to deliver proof-of-concept and production implementations in iterative sprints.
• Integrate Databricks with other technologies (ingestion tools, visualization tools).
• Proven experience working as a data engineer.
• Highly proficient in using the Spark framework (Python and/or Scala).
• Extensive knowledge of data warehousing concepts, strategies, and methodologies.
• Direct experience of building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks).
• Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics.
• Experience in designing and hands-on development of cloud-based analytics solutions.
• Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required.
• Designing and building data pipelines using API ingestion and streaming ingestion methods (a minimal API-ingestion sketch appears at the end of this description).
• Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential.
• Thorough understanding of Azure cloud infrastructure offerings.
• Strong experience in common data warehouse modelling principles, including Kimball.
• Working knowledge of Python is desirable.
• Experience developing security models.
• Databricks & Azure Big Data Architecture certifications would be a plus.
• Must be team oriented with strong collaboration, prioritization, and adaptability skills.

Mandatory skill sets: SQL & Python / PySpark, AWS Services, Glue, Appflow, Redshift, Data warehousing, Data modelling
Preferred skill sets: SQL & Python / PySpark, AWS Services, Glue, Appflow, Redshift, Data warehousing, Data modelling
Years of experience required: 3-10
Qualifications: BE / B.Tech / MCA / M.Tech
Required Skills: Python (Programming Language), Structured Query Language (SQL)
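
As a companion to the API-ingestion requirement above, here is a minimal sketch of pulling JSON records from a REST endpoint and landing them as a Delta table for downstream processing; the endpoint URL and table name are hypothetical and not part of this posting.

# Minimal API-ingestion sketch (endpoint URL and table name are hypothetical).
# Pulls JSON records from a REST API and lands them in a Delta table for downstream processing.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical REST endpoint returning a JSON array of records
resp = requests.get("https://api.example.com/v1/customers", timeout=30)
resp.raise_for_status()
records = resp.json()

# Each JSON object becomes one row; the schema is inferred from the records
df = spark.createDataFrame(records)

# Land the raw payload; data quality and consistency checks would run downstream
df.write.format("delta").mode("append").saveAsTable("analytics.bronze.customers")
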

PwC



Job Detail

  • Job Id
    JD3356793
  • Industry
    Not mentioned
  • Total Positions
    1
  • Job Type
    Full Time
  • Salary
    Not mentioned
  • Employment Status
    Permanent
  • Job Location
    Kolkata, West Bengal, India
  • Education
    Not mentioned
  • Experience
    Year