Who We Are
At HMH, we are a learning company. Our learning platforms and solutions help millions of learners dream big and explore their potential. When you work at HMH, you know that what you do has a transformative, lifelong impact on people. Over fifty-three million students and teachers use our learning platforms.
Primary Responsibilities
Develop and execute test plans and test cases to validate the functionality, accuracy, and performance of large language model (LLM) applications across different use cases and scenarios.
Employ a variety of testing techniques, including input fuzzing, adversarial testing, and bias detection, to ensure robustness and inclusivity.
Design and conduct comparative analyses to evaluate different LLM prompt template designs, prompt strategies, and user interactions, ensuring the most effective and user-friendly implementations are identified.
Work closely with cross-functional teams, including developers, data scientists, and product managers, to understand requirements and provide input on design, implementation, and testing strategies.
Lead AI safety and responsibility initiatives, ensuring all LLM applications comply with ethical AI principles, data privacy standards, and regulatory requirements.
Serve as an internal advocate, educating and advising the team on best practices for ethical AI development and deployment.
Develop and maintain metrics and monitoring systems to assess and continuously improve the quality and safety of LLM applications in real-world scenarios.
Perform extensive regression, load, and performance testing using Playwright and Gatling, ensuring the application's scalability, responsiveness, and reliable operation, particularly for LLM components.
Identify, document, and prioritize issues, bugs, and inconsistencies in LLM behavior.
Leverage debugging tools and techniques to troubleshoot issues related to model predictions and data processing, ensuring timely resolution and mitigation.
Ensure the integrity, accuracy, and efficiency of data processing from ingestion to model inference, addressing potential data quality and pipeline scalability issues.
Keep abreast of the latest developments in AI ethics and safety research, NLP and ML research and technologies, and quality assurance methodologies.
Integrate cutting-edge practices and knowledge into QA processes to strengthen testing methodologies and approaches, including the appropriate application of automation and other QA tools to improve productivity.
Communicate test results, findings, and recommendations effectively to stakeholders, providing clear and actionable feedback for improving LLM application quality and performance.
Conduct user acceptance testing (UAT) sessions with stakeholders and end-users when required.
Communicate effectively with cross-functional teams to clarify requirements and resolve issues.
Participate in Agile ceremonies, including daily stand-ups, sprint planning, and retrospectives.
What We Are Looking For
We are seeking experienced and motivated candidates for the AI Team at HMH. Candidates must have a genuine interest in honing their software quality craftsmanship and skills, in building great products proven out by automated test suites, and in contributing to and expanding our quality culture.
Education & Experience
Bachelor's degree in Computer Science, Information Technology, or a related field
Required Qualifications
5-8 years of experience in mixed manual and automation testing in product-based environments
Strong experience with database solutions, including writing SQL queries and performing database testing
Strong understanding of the software development life cycle (SDLC) and testing life cycle (STLC)
Proficiency in designing, executing, and reporting on test plans, test cases, and test scenarios
Solid experience with Jira, including managing work items, creating and closing bug tickets, and reporting
Solid understanding of logging and monitoring tools, such as Datadog
Excellent analytical and problem-solving skills, with the ability to debug and troubleshoot issues
Basic understanding of cloud-based computing (AWS or Azure) and cloud infrastructure designs
Experience with various automated testing tools and frameworks (Playwright and Gatling preferred; experience with Selenium, Cypress, JMeter, K6, Postman, Swagger, and other QA tools is also helpful)
Ability to design, code, and manage new test cases for dynamic workflows
Understanding of the appropriate use of TDD/BDD and GWT/AAA test design methods and tools is helpful
Motivated and willing to execute intensive manual testing and track results during design phases until automated testing solutions can be created
Ability to create documentation and best practices/standards manuals for the testing solutions being designed and developed
Skilled in reading, reviewing, and writing or updating code in languages such as Java, Python, JavaScript/TypeScript, or other functional or scripting languages
Strong communication and collaboration skills, with the ability to work effectively in a cross-functional team environment and communicate complex technical concepts to non-technical stakeholders
Self-motivated and proactive, with a passion for quality and continuous improvement in testing methodologies and approaches, and specifically for testing AI and ML implementations