Role Overview:
As a Data Engineer specializing in Databricks and AWS Glue, you will play a key role in designing, developing, and maintaining data pipelines and analytics solutions for our clients. You will collaborate with cross-functional teams to understand business requirements, architect scalable solutions, and ensure data reliability and performance.
Responsibilities:
• Design, build, and optimize data pipelines using Databricks and AWS Glue to ingest, transform, and load large volumes of data from various sources.
• Work closely with data scientists, analysts, and business stakeholders to understand data requirements and translate them into technical solutions.
• Develop ETL workflows and data processing jobs to cleanse, aggregate, and enrich data for downstream analytics and reporting.
• Implement best practices for data governance, security, and compliance, ensuring data quality and integrity throughout the pipeline.
• Monitor and troubleshoot data pipelines to identify and resolve issues in a timely manner, optimizing performance and resource utilization.
• Collaborate with cloud architects to optimize infrastructure and resource provisioning for Databricks and AWS Glue environments.
• Stay up-to-date with the latest advancements in big data technologies and best practices, continuously improving our data engineering processes and toolsets.
Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field; Master's degree preferred.
• 3+ years of experience in data engineering or related roles, with a focus on building scalable data pipelines.
• Hands-on experience with Databricks and AWS Glue, including proficiency in Spark-based data processing and orchestration.
• Strong programming skills in Python, Scala, or Java, with a solid understanding of software development principles and best practices.
• Experience with cloud platforms such as AWS, Azure, or GCP, including knowledge of core services and infrastructure management.
• Familiarity with data modeling concepts, database systems, and SQL query optimization.
• Excellent problem-solving and analytical skills, with a strong attention to detail and commitment to delivering high-quality solutions.
• Ability to work independently and collaboratively in a fast-paced environment, with a passion for learning and continuous improvement.
Good to Have:
• Certification in Databricks or AWS-related technologies.
• Experience with other big data tools and frameworks such as Apache Kafka, Apache Hadoop, or Apache Flink.
• Knowledge of machine learning concepts and libraries for advanced analytics and AI.
• Previous experience working with data visualization tools such as Tableau, Power BI, or Looker.
• Contributions to open-source projects or active participation in developer communities.
Experience: 1-3 Years
Location: Mumbai
Job Types: Full-time, Permanent
Pay: ₹8,874.00 - ₹45,599.09 per month
Benefits:
• Health insurance
• Provident Fund
Schedule:
• Day shift
• Monday to Friday
Experience:
• Azure Databricks: 1 year (Required)
Work Location: In person