Primary skills - PySpark
Responsibilities - Developing and maintaining scalable data processing pipelines, optimizing performance, and ensuring data quality and integrity
Proficiency in Python is mandatory
Familiarity with the GCP cloud platform and its big data services is mandatory
Experience with the Hadoop ecosystem (HDFS, Hive, HBase) is a plus
Good communication skills
Position count - 1
Experience - 3-5 years
Place of work - Gurgaon (no offsite)
Immediate joiner