Job Title: Big Data Senior Developer
Experience Range: 8 - 12 Years
Location: Hyderabad

Role Summary:
Responsible for designing, building, deploying, and maintaining mission-critical analytics solutions that process data quickly at big data scale. Contributes to the design, code, configuration, and documentation of components that manage data ingestion, real-time streaming, batch processing, data extraction, transformation, and high-end analytics. Owns one or more key components of the infrastructure and works to continually improve them, identifying gaps and improving the platform's quality, robustness, maintainability, and performance.

Responsibilities include:
- Work closely with the Architecture group to deliver technical solutions.
- Design and build large-scale data processing systems.
- Participate in the testing of prototypes and validate test procedures to ensure they are applicable to the design.
- Perform root-cause analysis (RCA) of complex issues spanning hardware, operating systems, applications, networks, and information security platforms, working closely with infrastructure teams and business users to quickly arrive at creative, tactical, and long-term solutions.
- Analyze data to identify deliverables, gaps, and inconsistencies in data sets.
- Collaborate with project teams (solution architects, business, QA, and project management) to ensure solutions meet business objectives and fall within timelines and acceptance criteria.
- Apply advanced knowledge of application, data, and infrastructure architecture disciplines.
- Monitor, troubleshoot, and tune services and applications, drawing on operational expertise: strong troubleshooting skills and an understanding of systems capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks.
- Communicate clearly, verbally and in writing, especially in technical contexts.
Expected Technical Skills:
- 8+ years of experience coding in Java with solid CS fundamentals, including data structure and algorithm design.
- At least 2 years of hands-on implementation experience with any three of the following: Kafka, Kafka Streams, Flink/AWS Kinesis, Scala, Spark, Python, EMR, ZooKeeper, and shell scripting.
- Strong experience with two of the following SQL and NoSQL databases: MySQL, Oracle, MongoDB, Elasticsearch, Hive, Druid, Cassandra, and Redis.
- Real-time distributed stream processing with Apache Flink, Spark, or Kafka Streams is an added advantage.
- Substantial depth of knowledge and experience in Java, data streaming, big data, Flink, Spark, Scala, Python, Kubernetes, and wrangling of data formats such as Parquet and JSON.
- Hands-on experience with Linux and its administration.