(i) - Spark Technical Solutions Engineer (ATSE)

Remote, IN, India

Job Description

Mission

As a Spark Technical Solutions Engineer, you will provide deep-dive technical and consulting solutions for challenging customer-reported Spark/ML/AI/Delta/Streaming/Lakehouse issues and resolve challenges involving the Databricks unified analytics platform, drawing on strong technical and customer-communication skills. You will assist our customers in their Databricks journey and provide them with the guidance, knowledge, and expertise they need to realize value and achieve their strategic objectives using our products.

Outcomes

• Perform initial analysis and troubleshooting of Spark issues using Spark UI metrics, the DAG, and event logs for customer-reported job slowness.
• Troubleshoot, resolve, and perform deep code-level analysis of Spark to address customer issues related to Spark core internals, Spark SQL, Structured Streaming, Delta, Lakehouse, and other Databricks Runtime features.
• Assist customers in setting up reproducible Spark problems, with solutions, in the areas of Spark SQL, Delta, memory management, performance tuning, Streaming, Data Science, and data integration.
• Participate in the Designated Solutions Engineer program and drive one or two strategic customers' day-to-day Spark and cloud issues.
• Plan and coordinate with Account Executives, Customer Success Engineers, and Resident Solution Architects on customer issues and best-practices guidelines.
• Participate in screen-sharing meetings and Slack channel conversations with internal stakeholders and customers, helping drive major Spark issues as an individual contributor.
• Build an internal wiki and knowledge base with technical documentation and manuals for the support team and customers; participate in the creation and maintenance of company documentation and knowledge base articles.
• Coordinate with Engineering and Backline Support teams to help identify and report product defects.
• Participate in the weekend and weekday on-call rotation, run escalations during Databricks Runtime outages and incident situations, multitask and plan day-to-day activities, and provide escalated support for critical customer operational issues.
• Provide best-practices guidance on Spark runtime performance and the use of Spark core libraries and APIs for custom-built solutions developed by Databricks customers.
• Be a true proponent of customer advocacy.
• Contribute to the development of tools and automation initiatives.
• Provide front-line support for third-party integrations with the Databricks environment.
• Review Engineering JIRA tickets and proactively notify the support leadership team to follow up on action items.
• Manage assigned Spark cases daily and adhere to committed SLAs.
• Achieve, and exceed, the support organization's KPI expectations.
• Strengthen your AWS/Azure and Databricks platform expertise through continuous learning and internal training programs.

Competencies

• Minimum 6 years of experience designing, building, testing, and maintaining Python/Java/Scala-based applications in typical project delivery and consulting environments.
• 3 years of hands-on experience developing two or more of the following at production scale: Big Data, Hadoop, Spark, Machine Learning, Artificial Intelligence, Streaming, Kafka, Data Science, or ElasticSearch industry use cases. Spark experience is mandatory.
• Hands-on experience in performance tuning and troubleshooting of Hive- and Spark-based applications at production scale.
• Proven real-world experience with JVM and memory-management techniques such as garbage collection and heap/thread dump analysis is preferred.
• Working, hands-on experience with SQL-based databases and Data Warehousing/ETL technologies such as Informatica, DataStage, Oracle, Teradata, SQL Server, and MySQL, and with SCD-type use cases, is preferred.
• Hands-on experience with AWS, Azure, or GCP is preferred.
• Excellent written and oral communication skills.
• Linux/Unix administration skills are a plus.
• Working knowledge of Data Lakes, preferably including SCD-type use cases at production scale.
• Demonstrated analytical and problem-solving skills, particularly those that apply to a distributed Big Data computing environment.

(ii) - Lead Spark TSE (L6)

Mission

As Lead of the Spark Technical Solutions team, you will lead a team of Technical Solutions Engineers and be responsible for driving deep-dive technical solutions for issues reported by Databricks customers. We expect the technical lead to resolve technical challenges with comprehensive technical and customer-communication skills. You will assist our customers in their Databricks journey and provide them with the guidance, knowledge, and expertise they need to realize value and achieve their strategic objectives using our products.

Outcomes

• As Lead and a member of the technical solutions team, you will be directly responsible for leading and driving technical solutions for problems reported by Databricks customers.
• Perform initial analysis and troubleshooting of Spark issues using Spark UI metrics, the DAG, and event logs for customer-reported job slowness.
• Troubleshoot, resolve, and perform deep code-level analysis of Spark to address customer issues related to Spark core internals, Spark SQL, Structured Streaming, Delta, Lakehouse, and other Databricks Runtime features.
• Assist customers in setting up reproducible Spark problems, with solutions, in the areas of Spark SQL, Delta, memory management, performance tuning, Streaming, Data Science, and data integration.
• Contribute to the development of tools and automation initiatives.
• Directly monitor, report on, and drive improvements to team-level metrics and KPIs; act as an escalation point with customers and internal teams; and optimize and develop support processes and tools.
• Work across multiple cross-functional teams, including Engineering, Product Management, Sales, and Customer Success; manage hiring, mentoring, and onboarding of new support engineers.
• Regularly meet one-on-one with your direct reports, conducting annual reviews and career-development discussions throughout the year.
• Be hands-on in assisting team members with issues related to Spark core internals, Spark SQL, Structured Streaming, Delta, Lakehouse, and other Databricks Runtime features.
• Manage and drive best-practices guidance on Spark runtime performance and the use of Spark core libraries and APIs for custom-built solutions developed by Databricks customers.
• Own Engineering JIRA tickets and proactively work to bring quicker resolutions to customer-reported issues; participate in the creation of knowledge base articles.
• Participate in the weekend and weekday on-call rotation, run escalations during Databricks Runtime outages and incident situations, multitask and plan day-to-day activities, and provide escalated support for critical customer operational issues.

Competencies

• Minimum 8-12 years of experience designing, building, testing, and maintaining Python/Java/Scala/Spark-based applications in typical project delivery and consulting environments.
• 3 years of hands-on experience developing and leading two or more of the following at production scale: Big Data, Hadoop, Spark, Machine Learning, Artificial Intelligence, Streaming, Kafka, Data Science, or ElasticSearch industry use cases. Spark experience is mandatory.
• Hands-on experience in performance tuning and troubleshooting of Hive- and Spark-based applications at production scale. Real-world experience with JVM and memory-management techniques such as garbage collection and heap/thread dump analysis is preferred.
• Working, hands-on experience with Data Lakes and with SQL-based databases and Data Warehousing/ETL technologies such as Informatica, DataStage, Oracle, Teradata, SQL Server, and MySQL is preferred.
• Hands-on experience with AWS, Azure, or GCP is preferred.
• Experience implementing CI/CD and monitoring/alerting for production systems; technical leadership in the design, implementation, and support of large-scale data and analytics solutions that are highly reliable, flexible, and scalable.
• Experience leading and driving end-to-end projects, with reporting and escalation to top levels of the organization.
• Experience leading teams in an organization with multiple reporting lines.
• Strong written and verbal communication skills; very good analytical, organizational, and multitasking skills.

(iii) - Spark Technical Solutions Analyst (L2)

Mission

As a Technical Solutions Analyst, you will provide technical support for Spark and Cloud

Platform related issues and resolve any challenges involving the Databricks unified analytics

platform. You will assist our customers in their Databricks journey and provide them with the

guidance, knowledge, and expertise that they need to realize value and achieve their strategic

objectives using our products.

Outcomes

• Perform initial analysis and troubleshooting of issues in product deployment, data migration, workspace recovery, networking setup, and IAM functionality in Databricks AWS/Azure cloud environments.
• Perform initial analysis and troubleshooting of Spark issues using Spark UI metrics, the DAG, and event logs for customer-reported job slowness.
• Provide proactive solutions and suggestions for Spark/platform library-integration issues.
• Assist technical support team members in setting up cloud resources, notebook environments, cloud labs, test beds, log-download setups, and networking for reproducing Spark/platform issues.
• Experiment with Private Preview features (Spark/platform) with the help of the technical leadership team, and document the test results.
• Provide an initial level of analysis for third-party integrations with the Databricks environment.
• Facilitate screen-sharing meetings and Slack channel conversations with internal stakeholders and customers, helping pull affected customer subscription IDs and assisting in documenting incident summaries for the Incident Management team.
• Review Support team JIRA tickets and proactively notify the support leadership team to follow up on action items.
• Assist in the development of tools and automation initiatives.
• Manage customer support cases daily and adhere to committed SLAs.
• Build an internal wiki and knowledge base with technical documentation and manuals for the support team and customers.
• Coordinate with Engineering teams to help identify and report product defects.
• Participate in the creation and maintenance of company documentation and knowledge articles.
• Be a true proponent of customer advocacy.
• Strengthen your AWS/Azure and Databricks platform expertise through continuous learning and internal training programs.

Competencies

• 0-2 years of experience designing, building, testing, and maintaining Python/Java/Scala-based applications.
• 3-6 months of internship experience in Big Data, Artificial Intelligence, Data Science, product development, growth, or related areas at reputed companies is preferred.
• Proven experience in a real-world project or an academic project in subject areas such as Hadoop, Spark, ML, Deep Learning, AI, Robotics, NLP, or Cloud Computing is preferred.
• Hands-on project experience in data structures, software algorithms, operating systems, and distributed computing.
• Conceptual understanding of, and hands-on experience with, SQL-based databases and Data Warehousing/ETL technologies such as Informatica, DataStage, Oracle, Teradata, SQL Server, and MySQL is preferred.
• Good Linux/Unix administration skills are preferred.
• Basic understanding of, or good hands-on project experience with, Hadoop/Spark ecosystems in an administrative, development, or support role is preferred.
• Hands-on experience with AWS or Azure is preferred.
• Excellent written and oral communication skills.
• Demonstrated analytical and problem-solving skills, particularly those that apply to a distributed Big Data computing environment.


Job Type: Contractual / Temporary
Contract length: 6 months

Pay: ₹150,000.00 - ₹170,000.00 per month

Schedule:

• Day shift
• Monday to Friday

Experience:

• total work: 6 years (Required)

Work Location: Remote

Application Deadline: 16/12/2024
Expected Start Date: 14/12/2024



Job Id: JD3556176