| Position Title | Automation Test Engineer – AI/ML Output Validation (1 Position) |
| Salary Grade | Alphanumeric code (if any) |
| Job Position Summary | We are seeking a highly skilled Automation Test Engineer to design, build, and maintain automated testing frameworks that validate AI/ML-generated outputs for source-code internationalization (i18n) issue detection. The candidate will be responsible for ensuring accuracy, consistency, and reliability of model predictions and generated remediation suggestions across multiple programming languages. This role combines strong automation expertise with a deep understanding of software testing methodologies, source-code behavior, and AI/ML validation practices. |
| Key responsibilities | - Develop automated test scripts to validate AI/ML outputs related to i18n issue detection and recommended code fixes.
- Build scalable test frameworks to evaluate model precision, recall, F1 score, and false-positive/false-negative rates.
- Create automated pipelines for regression testing of updated AI/ML models.
- Design and execute test plans, test cases, and acceptance criteria for multi-language codebases (e.g., Ruby, Python, TypeScript, Kotlin/Swift).
- Validate correctness, safety, and quality of AI-generated code remediation suggestions.
- Compare AI predictions against expected outputs using rule-based, static-analysis, and human-verified datasets.
- Integrate automated tests into CI/CD pipelines to ensure continuous model evaluation.
- Log defects, inconsistencies, and model deviations using standardized workflows.
- Collaborate closely with AI/ML Engineers, Static Analysis Engineers, and Software Developers to improve model accuracy and test coverage.
- Maintain documentation for test frameworks, testing tools, and processes.
|
| Desired Qualifications / Certifications | - Bachelor’s or Master’s degree in Computer Science, Software Engineering, Information Systems, or a related field.
- Certifications in software testing or automation tools (e.g., ISTQB) preferred.
- Training in AI/ML fundamentals is a plus.
|
| Preferred relevant work experience | - 3–4+ years of experience in automation testing or QA engineering.
- Hands-on experience testing systems that produce AI/ML outputs, text generation, or rule-based predictions.
- Experience working with or testing static code analyzers, linters, or compilers is strongly preferred.
- Previous work validating code correctness or scanning tools (e.g., ESLint, Pylint) is advantageous.
|
| Skills and knowledge | Technical Skills:
- Strong programming skills in Python or similar languages for automation scripting.
- Experience with test automation frameworks (e.g., pytest or custom frameworks).
- Ability to build automated evaluation harnesses comparing expected vs. actual AI outputs.
- Familiarity with source-code structures (ASTs, syntax patterns, code smells, linting rules).
- Understanding of i18n concepts, localization patterns, and common i18n coding issues.
- Proficiency with GitHub/GitLab CI/CD for automated test execution workflows.
- Experience working with APIs, JSON data validation, and structured output comparisons.
- Strong debugging and problem-solving skills in multi-language codebases.
General Skills:
- Excellent attention to detail and commitment to accuracy.
- Strong analytical thinking and ability to identify subtle model or logic inconsistencies.
- Effective communication skills for collaborating across AI, engineering, and QA teams.
- Ability to work independently and in fast-paced, iterative development cycles.
|
| Position reports to | Program Manager |
| Position supervises | Nil |
| Travel needed | Nil |
| Location | Fidel Softech, Pune office, or client offices in India or abroad |
| Employment details | Full-time, on the rolls of Fidel Softech |