Associate III Data Engineering


Job Description

Experience: 3 - 5 Years
Openings: 1
Location: Pune, MH, India


### Role description


Role Proficiency:



This role requires proficiency in data pipeline development, including coding and testing pipelines that ingest, wrangle, transform, and join data from various sources. Must be adept at using ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding skills in Python, PySpark, and SQL. Works independently and demonstrates proficiency in at least one data-related domain, with a solid understanding of slowly changing dimension (SCD) concepts and data warehousing principles.
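As an illustration of the SCD concepts referenced above, here is a minimal sketch of a Type 2 merge in PySpark. The table names, the tracked `address` attribute, and the validity/flag columns are hypothetical assumptions, not part of this role description:

```python
# Minimal SCD Type 2 sketch; table names, the tracked "address" attribute,
# and the validity/flag columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

current = spark.table("dim_customer").alias("c")           # existing dimension (assumed)
incoming = spark.table("stg_customer_updates").alias("i")  # staged updates (assumed)

# Current rows whose tracked attribute changed in the incoming batch.
changed = current.join(
    incoming, F.col("c.customer_id") == F.col("i.customer_id")
).where(F.col("c.address") != F.col("i.address"))

# Expire the old version of each changed row...
expired = (changed.select("c.*")
           .withColumn("valid_to", F.current_date())
           .withColumn("is_current", F.lit(False)))

# ...and open a new version with an unbounded validity window.
opened = (changed.select("i.*")
          .withColumn("valid_from", F.current_date())
          .withColumn("valid_to", F.lit(None).cast("date"))
          .withColumn("is_current", F.lit(True)))
```

In a Type 2 dimension, changed rows are never overwritten: the old version is closed out and a new version is appended, preserving full history for point-in-time queries.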



Outcomes:



• Collaborate closely with data analysts, data scientists, and other stakeholders to ensure data accessibility, quality, and security across various data sources.


• Design, develop, and maintain data pipelines that collect, process, and transform large volumes of data from various sources.


• Implement ETL (Extract, Transform, Load) processes to facilitate efficient data movement and transformation.


• Integrate data from multiple sources, including databases, APIs, cloud services, and third-party data providers.


• Establish data quality checks and validation procedures to ensure data accuracy, completeness, and consistency (see the sketch after this list).


• Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes.


• Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools.
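As a concrete example of the data quality checks listed above, here is a minimal sketch in PySpark; the `order_id` and `amount` columns and the specific rules are hypothetical:

```python
# Minimal data quality check sketch; column names and rules are hypothetical.
from pyspark.sql import functions as F

def validate_orders(df):
    """Fail fast on basic completeness and consistency rules."""
    total = df.count()
    null_ids = df.where(F.col("order_id").isNull()).count()
    dupes = total - df.dropDuplicates(["order_id"]).count()
    negatives = df.where(F.col("amount") < 0).count()

    errors = []
    if null_ids:
        errors.append(f"{null_ids} rows with a NULL order_id")
    if dupes:
        errors.append(f"{dupes} duplicate order_id values")
    if negatives:
        errors.append(f"{negatives} rows with a negative amount")
    if errors:
        raise ValueError("data quality check failed: " + "; ".join(errors))
    return df
```

Placing a check like this between ingestion and load means a bad batch fails loudly instead of silently corrupting downstream tables.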



Measures of Outcomes:



• Adherence to engineering processes and standards


• Adherence to schedule / timelines


• Adherence to SLAs where applicable


• # of defects post-delivery


• # of non-compliance issues


• Reduction in recurrence of known defects


• Quick turnaround on production bugs


• Completion of applicable technical/domain certifications


• Completion of all mandatory training requirements


• Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times).


• Average time to detect, respond to, and resolve pipeline failures or data issues.



Outputs Expected:



Code Development:



• Develop data processing code independently, ensuring it meets performance and scalability requirements.




Documentation:



• Create documentation for personal work and review deliverable documents, including source-target mappings, test cases, and results.




Configuration:



• Follow configuration processes diligently.




Testing:



• Create and conduct unit tests for data pipelines and transformations to ensure data quality and correctness (see the sketch after this list).


• Validate the accuracy and performance of data processes.
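A minimal sketch of what such a unit test might look like with pytest and a local SparkSession; the `dedupe_orders` transformation is a hypothetical stand-in for real pipeline logic:

```python
# Minimal pipeline unit test sketch; the transformation under test is hypothetical.
import pytest
from pyspark.sql import SparkSession

def dedupe_orders(df):
    """Transformation under test: keep one row per order_id."""
    return df.dropDuplicates(["order_id"])

@pytest.fixture(scope="module")
def spark():
    # A local, single-threaded session is enough for small test fixtures.
    return (SparkSession.builder
            .master("local[1]")
            .appName("pipeline-tests")
            .getOrCreate())

def test_dedupe_orders_removes_duplicates(spark):
    df = spark.createDataFrame(
        [(1, "a"), (1, "a"), (2, "b")],
        ["order_id", "payload"],
    )
    assert dedupe_orders(df).count() == 2
```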




Domain Relevance:



• Develop features and components with a solid understanding of the business problems being addressed for the client.


• Understand data schemas in relation to domain-specific contexts, such as EDI formats.




Defect Management:



• Raise, fix, and retest defects in accordance with project standards.




Estimation:



• Estimate time, effort, and resource dependencies for personal work.




Knowledge Management:



• Consume and contribute to project-related documents, SharePoint libraries, and client universities.




Design Understanding:



• Understand the design and low-level design (LLD) and link them to requirements and user stories.




Certifications:



• Obtain relevant technology certifications to enhance skills and knowledge.



Skill Examples:



• Proficiency in SQL, Python, or other programming languages used for data manipulation.


• Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.


• Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).


• Ability to test data pipelines and evaluate results against data quality and performance specifications.


• Experience in performance-tuning data processes.


• Proficiency in querying data warehouses.



Knowledge Examples:






• Knowledge of the various ETL services provided by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, and Azure ADF/ADLF.


• Understanding of data warehousing principles and practices.


• Proficiency in SQL for analytics, including windowing functions (see the sketch after this list).


• Familiarity with data schemas and models.


• Understanding of domain-related data and its implications.
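As an example of the windowing functions mentioned in this list, here is a minimal PySpark sketch that keeps each customer's most recent order; the table and column names are hypothetical:

```python
# Windowing-function sketch; table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("window-sketch").getOrCreate()
orders = spark.table("fact_orders")  # assumed fact table

# Rank each customer's orders by recency, then keep only the latest row.
w = Window.partitionBy("customer_id").orderBy(F.col("order_date").desc())
latest = (orders
          .withColumn("rn", F.row_number().over(w))
          .where(F.col("rn") == 1)
          .drop("rn"))
```

The same pattern is expressed in SQL as `ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC)`.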



Additional Comments:



Seeking a Big Data Engineer with 4-6 years of experience to play a key role in building, managing, and evolving our big data pipelines. You'll leverage your data engineering expertise not only to onboard customers but also to contribute to critical initiatives that advance our technology stack. This role offers the opportunity to directly improve product capabilities while expanding your skill set with cutting-edge technologies.

Responsibilities:

• Collaborate closely with Technical Leaders to devise and build the right solutions.

• Participate in design discussions and brainstorming sessions to select, integrate, and maintain the tools and frameworks required to solve Big Data problems.

• Implement and maintain systems to cleanse, process, and analyze large data sets using distributed processing tools like Airflow and Spark (a minimal orchestration sketch follows below).

• Learn quickly and adapt to new technologies.

• Work with software and support teams in multiple regions.

• Contribute insights and suggestions as part of our continuous improvement.

• Work effectively in a fast-paced, collaborative environment.

Required Qualifications:

• 4 to 6 years of experience in data engineering.

• Strong computer science background and knowledge of software and product development methodologies.

• In-depth understanding of the Big Data ecosystem, including processing frameworks like Spark and Hadoop and the file types they work with.

• Experience with ETL and data pipeline orchestration tools like Apache Airflow, dbt, etc.

• Excellent coding skills in Python, Java or Scala, and SQL.

• Experience with Git and build tools like Gradle/Maven/SBT.

• Experience with, or an understanding of, data warehouse platforms.

• Experience working on cloud platforms such as AWS, GCP, and Azure.

• Strong understanding of object-oriented design, data structures, algorithms, profiling, and optimization.

• Excellent communication and collaboration skills.
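To make the Airflow orchestration responsibility concrete, here is a minimal DAG sketch; the task breakdown and callables are hypothetical placeholders, not a prescribed design:

```python
# Minimal Airflow DAG sketch; tasks and callables are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")

def transform():
    print("cleanse and reshape the extracted data")

def load():
    print("write the transformed data to the warehouse")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Enforce strict extract -> transform -> load ordering.
    extract_task >> transform_task >> load_task
```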


### Skills


Java, Microservices, Python, Big Data



### About UST


UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people, and led by purpose, UST partners with its clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into its clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.


Job Detail

  • Job Id: JD3578803
  • Total Positions: 1
  • Job Type: Full Time
  • Employment Status: Permanent
  • Job Location: MH, IN, India