Databricks + PySpark

AP, IN, India

Job Description

  • Data Pipeline Development: Design, implement, and maintain scalable, efficient data pipelines using PySpark and Databricks for ETL processing of large data volumes.
  • Cloud Integration: Develop solutions leveraging Databricks on cloud platforms (AWS/Azure/GCP) to process and analyze data in a distributed computing environment.
  • Data Modeling: Build robust data models, ensuring high-quality data integration and consistency across multiple data sources.
  • Optimization: Optimize PySpark jobs for performance, ensuring efficient use of resources and cost-effective execution.
  • Collaborative Development: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver actionable insights.
  • Automation & Monitoring: Implement monitoring solutions for data pipeline health, performance, and failure detection.
  • Documentation & Best Practices: Maintain comprehensive documentation of architecture, design, and code; ensure adherence to best practices for data engineering, version control, and CI/CD processes.
  • Mentorship: Provide guidance to junior data engineers and help with the design and implementation of new features and components.


________________________________________


Required Skills & Qualifications:


  • Experience: 6+ years in data engineering or software engineering roles, with a strong focus on PySpark and Databricks.
  • Technical Skills:
      • Proficient in PySpark for distributed data processing and ETL pipelines.
      • Experience running Apache Spark workloads on Databricks in a cloud environment.
      • Solid knowledge of SQL, data wrangling, and data manipulation.
      • Experience with cloud platforms (AWS, Azure, or GCP) and their respective data storage services (S3, ADLS, BigQuery, etc.).
      • Familiarity with data lakes, data warehouses, and NoSQL databases (e.g., MongoDB, Cassandra, HBase).
      • Experience with orchestration tools such as Apache Airflow, Azure Data Factory, or dbt.
      • Familiarity with containerization (Docker, Kubernetes) and DevOps practices.
  • Problem Solving: Strong ability to troubleshoot and debug issues related to distributed computing, performance bottlenecks, and data quality.
  • Version Control: Proficient in Git-based workflows and version control.
  • Communication Skills: Excellent written and verbal communication, with the ability to explain complex technical concepts to both technical and non-technical stakeholders.
  • Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).

About Virtusa

Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a global team of 27,000 people that cares about your growth, one that seeks to provide you with exciting projects, opportunities, and state-of-the-art technologies throughout your career with us.

Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.

Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Beware of fraud agents! Do not pay money to get a job.

MNCJobsIndia.com will not be responsible for any payment made to a third party. All Terms of Use are applicable.


Job Detail

  • Job Id: JD3622508
  • Industry: Not mentioned
  • Total Positions: 1
  • Job Type: Contract
  • Salary: Not mentioned
  • Employment Status: Permanent
  • Job Location: AP, IN, India
  • Education: Not mentioned
  • Experience: Year