Overall Purpose: This career step requires career-level experience. The role is responsible for any or all functions involved in the development and/or maintenance of new or existing applications.
Key Roles and Responsibilities: Using programming languages and technology, writes code, completes programming and documentation, and performs testing and debugging of applications. Analyzes, designs, programs, debugs, and modifies software enhancements and/or new products used in local, networked, or Internet-related computer programs. May interact with users to define system requirements and/or necessary modifications.
Job responsibilities include:
• Databricks, data pipeline, and database monitoring and response
• Database problem management and resolution, for both VM-based and Azure PaaS databases
• Change and configuration management, change implementation, deployment, and performance tuning of databases, Azure Databricks, Spark jobs, and clusters
• Scheduled maintenance of clusters and topology changes in line with business requirements
• Storage account (Data Lake, Blob, File Share) maintenance and housekeeping (a housekeeping sketch follows this list)
• Service assurance management, and status and metrics collection/reporting
• Certificate renewal and regular updating of database credentials
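As an illustration of the storage housekeeping involved, the following is a minimal sketch using the azure-storage-blob SDK. The connection string, container name, "tmp/" prefix, and 30-day retention window are hypothetical placeholders, not details from this role.

    import os
    from datetime import datetime, timedelta, timezone
    from azure.storage.blob import BlobServiceClient

    # Hypothetical placeholders: connection string, container, prefix, retention.
    service = BlobServiceClient.from_connection_string(
        os.environ["AZURE_STORAGE_CONNECTION_STRING"])
    container = service.get_container_client("kepler-landing")
    cutoff = datetime.now(timezone.utc) - timedelta(days=30)

    # Delete temporary blobs older than the retention window.
    for blob in container.list_blobs(name_starts_with="tmp/"):
        if blob.last_modified < cutoff:
            container.delete_blob(blob.name)
            print(f"deleted {blob.name}")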
KEY ROLES & RESPONSIBILITIES
============================
FAULT MANAGEMENT
=====================
Technical support and troubleshooting of Kepler platform and service issues, including:
• Alarm and KPI monitoring (proactive and reactive).
• Investigation and diagnosis of ETL, data streaming pipeline, and database issues.
• Triaging events, communicating event status, and coordinating root cause analysis (RCA).
• Mitigating/remediating tier-2 technology fault conditions that span multiple production clusters.
• Restoring database, event streaming, and data transformation workflows.
• May involve implementing fixes or design changes.
• Resolving incidents and Call to Work requests (via PagerDuty) independently, engaging additional support teams, engineering, and vendors as needed.
• Proactively monitoring and maintaining configurations against key performance indicators (KPIs) to achieve designed performance and reliability levels (a monitoring sketch follows this list).
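To give a concrete sense of this kind of proactive monitoring, here is a minimal sketch that flags failed Databricks job runs and pages on-call via PagerDuty. The workspace URL, tokens, and routing key are hypothetical placeholders read from the environment; the Databricks Jobs 2.1 and PagerDuty Events v2 endpoints are public APIs.

    import os
    import requests

    # Hypothetical placeholders: workspace URL, token, and routing key.
    HOST = os.environ["DATABRICKS_HOST"]  # e.g. https://adb-....azuredatabricks.net
    DBX_HEADERS = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}
    PD_ROUTING_KEY = os.environ["PAGERDUTY_ROUTING_KEY"]

    # List recent completed job runs (Databricks Jobs 2.1 REST API).
    runs = requests.get(f"{HOST}/api/2.1/jobs/runs/list",
                        headers=DBX_HEADERS,
                        params={"completed_only": "true", "limit": 25}).json()

    # Trigger a PagerDuty incident (Events API v2) for each failed run.
    for run in runs.get("runs", []):
        if run.get("state", {}).get("result_state") == "FAILED":
            requests.post("https://events.pagerduty.com/v2/enqueue", json={
                "routing_key": PD_ROUTING_KEY,
                "event_action": "trigger",
                "dedup_key": f"dbx-run-{run['run_id']}",
                "payload": {
                    "summary": f"Databricks run {run['run_id']} failed: "
                               f"{run.get('run_name', '')}",
                    "source": HOST,
                    "severity": "error",
                },
            })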
CHANGE MANAGEMENT
=====================
• Manage, change, and track ETL and data streaming pipelines on Spark and Azure Databricks (see the pipeline-change sketch after this list)
• Manage, coordinate, track, and report on activities carried out by production teams.
• Create AOTS-CM tickets and ensure production activities remain in GTOC compliance.
• Manage maintenance windows, working with the team to implement desired changes.
• Own and manage the change management process for activities on the Kepler platform, and understand the existing processes.
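One form such a tracked pipeline change can take is updating a Databricks job definition through the Jobs 2.1 REST API, sketched minimally below. The workspace URL, job ID, and cron schedule are hypothetical placeholders, and AOTS-CM/GTOC record-keeping is assumed to happen outside the script.

    import os
    import requests

    # Hypothetical placeholders: workspace URL and token from the environment.
    HOST = os.environ["DATABRICKS_HOST"]
    HEADERS = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

    def update_job_schedule(job_id: int, cron: str) -> None:
        """Change a job's schedule via the Jobs 2.1 'reset' endpoint."""
        # 'reset' overwrites the full job settings, so fetch the current ones first.
        current = requests.get(f"{HOST}/api/2.1/jobs/get",
                               headers=HEADERS, params={"job_id": job_id})
        current.raise_for_status()
        settings = current.json()["settings"]
        settings["schedule"] = {"quartz_cron_expression": cron,
                                "timezone_id": "Asia/Kolkata"}
        resp = requests.post(f"{HOST}/api/2.1/jobs/reset", headers=HEADERS,
                             json={"job_id": job_id, "new_settings": settings})
        resp.raise_for_status()

    # e.g. move a hypothetical job to a nightly 02:30 IST run.
    update_job_schedule(12345, "0 30 2 * * ?")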
TECHNOLOGY INSERTION AND PRODUCT DEVELOPMENT
================================================
• Partner with Labs, product development, and engineering teams to create process documentation, methods, procedures, tools, and guidelines for the current Kepler roadmap and for capacity management, and create/execute use cases, test cases, and synthetic testing during new technology instantiation.
• Participate in collaboration sessions with engineering and vendors on product capabilities and improvements.
• Review and interpret technical documentation and specifications for inclusion in, or adaptation into, database templates.
• May interact virtually with stakeholders to explain the root cause of issues.
• Collaborate with upstream and downstream partners to conduct change management procedures.
• Develop applications and scripts and act as system DBA (database administrator), building a SharePoint knowledge base and internal messaging bots while maintaining other key critical software/data platforms.
• Apply tools and techniques for analyzing and documenting logical relationships among logs, traces, data, processes, or events, translating business problems into insights.
• Ensure that the environments where Kepler databases and data pipelines are deployed stay continuously in compliance, with no open security issues (i.e., implement/deploy security fixes and version upgrades, renew certificates on time, etc.; a certificate-expiry sketch follows this list).
• Ensure 24x7x365 availability of the platform; flexibility to work in three rotational India shifts (morning, afternoon, and night) with monthly rotation is required.
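Renewing certificates "on time" presupposes knowing when they expire; the following minimal sketch checks TLS certificate expiry using only Python's standard library. The endpoint list and the 30-day alert threshold are hypothetical placeholders.

    import socket
    import ssl
    from datetime import datetime, timezone

    def days_until_expiry(host: str, port: int = 443) -> int:
        """Return days until the TLS certificate presented by host expires."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # 'notAfter' looks like 'Jun  1 12:00:00 2026 GMT'.
        expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
        return (expires.replace(tzinfo=timezone.utc)
                - datetime.now(timezone.utc)).days

    # Hypothetical endpoint list; alert when renewal is due within 30 days.
    for endpoint in ["example-db.database.windows.net"]:
        if days_until_expiry(endpoint) < 30:
            print(f"renew certificate for {endpoint}")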
AUTOMATION
==============
• Perform automation through scripting and programming to bring efficiency to support and monitoring functions.
• Perform data analysis on platform and/or organizational metrics using industry-standard analytics and data presentation tools.
• Enhance the existing proactive monitoring and issue-detection functionality associated with Kepler.
• Automate scheduled start-up and shutdown of services based on business needs (see the start/stop sketch after this list).
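As one example of such scheduled automation, the sketch below starts or terminates an Azure Databricks cluster through the Clusters 2.0 REST API, suitable for invocation from a scheduler such as cron or Azure Automation. The workspace URL, token, and cluster ID are hypothetical placeholders.

    import os
    import sys
    import requests

    # Hypothetical placeholders: workspace URL and token from the environment.
    HOST = os.environ["DATABRICKS_HOST"]
    HEADERS = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

    def set_cluster_state(cluster_id: str, action: str) -> None:
        """Start or terminate a cluster via the Clusters 2.0 REST API.

        Note: 'clusters/delete' terminates the cluster; it does not
        permanently delete it ('clusters/permanent-delete' would).
        """
        endpoint = "start" if action == "start" else "delete"
        resp = requests.post(f"{HOST}/api/2.0/clusters/{endpoint}",
                             headers=HEADERS, json={"cluster_id": cluster_id})
        resp.raise_for_status()

    if __name__ == "__main__":
        # e.g. python cluster_toggle.py <cluster_id> start|stop
        set_cluster_state(sys.argv[1], sys.argv[2])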
Education: Bachelor's degree in Computer Science, Engineering, or Operations preferred.
Preferred Experience: 8-12 years
Must-Have Certifications: DP-900 (Azure Data Fundamentals)
Preferred Certifications: DP-203 (Data Engineering on Azure) / DP-300 (Azure Database Administrator)
Required Skills (Keywords): DP-203, DP-300, DP-900, Azure Databricks, Data Pipelines, Data Streaming, ETL, Spark, Data Processing, Azure Data Lake, NoSQL, Cosmos DB, Solr, Azure SQL, Data Warehousing, Event Hubs, Cassandra
Weekly Hours: 40
Time Type: Regular
Location: Hyderabad, Andhra Pradesh, India
It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities.