Risk-based test framework for LLM features in regulated software
January 24, 2026 · Grace Period
Authors
Zhiyin Zhou
arXiv ID
2601.17292
Category
cs.SE: Software Engineering
Citations
0
Venue
David C. Wyld et al. (Eds.): CSML, AISCA, DNLP, SOEA, NET, BDHI, SIPO 2026, pp. 107-124, 2026. CS & IT - CSCP 2026
Abstract
Large language models are increasingly embedded in regulated and safety-critical software, including clinical research platforms and healthcare information systems. While these features enable natural language search, summarization, and configuration assistance, they introduce risks such as hallucinations, harmful or out-of-scope advice, privacy and security issues, bias, instability under change, and adversarial misuse. Prior work on machine learning testing and AI assurance offers useful concepts but limited guidance for interactive, product-embedded assistants. This paper proposes a risk-based testing framework for LLM features in regulated software: a six-category risk taxonomy, a layered test strategy mapping risks to concrete tests across guardrail, orchestration, and system layers, and a case study applying the approach to a Knowledgebase assistant in a clinical research platform.
Similar Papers
In the same crypt: Software Engineering
GraphCodeBERT: Pre-training Code Representations with Data Flow
DeepTest: Automated Testing of Deep-Neural-Network-driven Autonomous Cars
Microservices: yesterday, today, and tomorrow
Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks