Data Engineer JD:
Roles & Responsibilities:
Design and Develop Data Infrastructure: Architect, build, and optimize scalable data infrastructure, including data warehouses, data lakes, and ETL/ELT processes.
Data Integration: Implement efficient and reliable data integration pipelines, ensuring the smooth flow of data from various sources to target systems.
Data Modeling and Schema Design: Design and implement data models and schemas that support business requirements and optimize data retrieval and analysis.
Data Quality and Governance: Develop and implement data quality checks, data validation processes, and data governance practices to maintain data integrity and accuracy.
Performance Optimization: Identify performance bottlenecks, optimize data processing and storage systems, and improve overall data pipeline efficiency.
Data Security: Ensure data privacy and security by implementing appropriate access controls, encryption techniques, and data protection measures.
Collaboration: Collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to understand their data needs and provide technical support.
Documentation and Monitoring: Create and maintain technical documentation, data dictionaries, and monitoring systems to track data pipeline performance and identify issues.
Troubleshooting and Issue Resolution: Investigate and resolve data-related issues, including data quality issues, system failures, and performance degradation.
Data Architecture and Technology Evaluation: Stay updated with industry trends and emerging technologies in data engineering, and evaluate their potential impact and suitability for the organization.
Qualifications: What you will need to succeed in the role (Minimum Qualifications and Skills Required)
3-8 years of experience in data engineering, data integration, or related roles.
Strong proficiency in database management/development and working experience with relational databases, data modeling, and schema design.
Proficiency in data integration tools, ETL/ELT frameworks, and data pipeline orchestration tools.
Strong programming skills in languages such as Python/Java.
Understanding of data warehousing concepts, dimensional modeling, and data governance practices.
Knowledge of data security and privacy principles, as well as experience implementing data security measures.
Working experience with cloud platforms (AWS, Azure, GCP) and their data services, such as AWS Glue, Azure Data Factory, and GCP Dataflow.
Knowledge of Docker and Kubernetes is an added advantage.
Good understanding of shell/Bash scripting.
Good analytical skills to understand and present statistical trends and forecasts.
Excellent problem-solving skills, attention to detail, and the ability to work effectively in a team environment.
About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a global team of 30,000 people that cares about your growth, one that seeks to provide you with exciting projects and opportunities, and to have you work with state-of-the-art technologies throughout your career with us.
Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.
Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.