
Explainable GMDH-type neural networks for decision making: case of medical diagnostics

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)
1 Download (Pure)

Abstract

In medical diagnostics, the use of interpretable artificial neural networks (ANNs) is crucial to enabling healthcare professionals to make informed, risk-aware decisions, especially when faced with uncertainties in patient data and expert opinions. Despite advances, conventional ANNs often produce complex, non-transparent models that limit interpretability, particularly in medical contexts where transparency is essential. Existing methods, such as decision trees and random forests, provide some interpretability but struggle with inconsistent medical data and fail to adequately quantify decision uncertainty. This paper introduces a novel Group Method of Data Handling (GMDH)-type neural network approach that addresses these gaps by generating concise, interpretable decision models based on the self-organizing concept. The proposed method builds multilayer networks from two-argument logical functions, ensuring explainability and minimizing the negative impact of human intervention. A selection criterion grows the networks incrementally, optimizing complexity while reducing validation error. The algorithm's convergence is proven through a bounded, monotonically decreasing error sequence, ensuring reliable solutions. Tested on complex diagnostic cases, including infectious endocarditis, systemic lupus erythematosus, and postoperative outcomes in acute appendicitis, the method achieved high expert agreement scores (Fleiss's kappa of 0.98 (95% CI 0.97-0.99) and 0.86 (95% CI 0.83-0.89), respectively) compared to random forests (0.84 and 0.71). These results demonstrate statistically significant improvements (…)
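The self-organizing growth the abstract describes — layers built from two-argument logical functions, with a validation-error selection criterion that stops when the error sequence no longer decreases — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the primitive set `OPS`, the beam width `width`, and the exact stopping rule are all assumptions made for the sketch.

```python
from itertools import combinations

# Assumed set of two-argument logical primitives; the paper's exact
# basis functions are not specified on this page.
OPS = {
    "AND":  lambda a, b: a and b,
    "OR":   lambda a, b: a or b,
    "XOR":  lambda a, b: a != b,
    "NAND": lambda a, b: not (a and b),
}

def error(col, target):
    """Fraction of rows where a candidate feature disagrees with the target."""
    return sum(int(bool(c) != bool(t)) for c, t in zip(col, target)) / len(target)

def gmdh_grow(train_X, train_y, val_X, val_y, max_layers=5, width=4):
    """Greedy GMDH-style growth: each layer combines pairs of surviving
    features with two-argument logical functions, keeps the `width` best
    by validation error, and stops once validation error stops dropping,
    so the retained error sequence is monotonically decreasing and bounded."""
    feats_tr = [list(col) for col in zip(*train_X)]   # training columns
    feats_va = [list(col) for col in zip(*val_X)]     # validation columns
    best_err = min(error(c, val_y) for c in feats_va)
    model_desc = None
    for _ in range(max_layers):
        cands = []
        for i, j in combinations(range(len(feats_va)), 2):
            for name, op in OPS.items():
                new_va = [op(a, b) for a, b in zip(feats_va[i], feats_va[j])]
                cands.append((error(new_va, val_y), i, j, name))
        cands.sort(key=lambda t: t[0])
        if cands[0][0] >= best_err:
            break  # selection criterion: validation error no longer decreases
        best_err = cands[0][0]
        survivors = cands[:width]
        feats_tr = [[OPS[n](a, b) for a, b in zip(feats_tr[i], feats_tr[j])]
                    for _, i, j, n in survivors]
        feats_va = [[OPS[n](a, b) for a, b in zip(feats_va[i], feats_va[j])]
                    for _, i, j, n in survivors]
        model_desc = survivors[0]  # (val error, left input, right input, op)
    return best_err, model_desc
```

Because the resulting model is a shallow composition of named logical operations, the surviving description (e.g. "AND of inputs 0 and 1") can be read directly by a clinician, which is the interpretability property the abstract emphasizes.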
Original language: English
Article number: 113607
Journal: Applied Soft Computing
Volume: 182
DOIs
Publication status: Published - 23 Jul 2025

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 3 - Good Health and Well-being

Keywords

  • Group Method of Data Handling
  • Artificial neural networks (ANN)
  • Explainable decision models
  • Uncertainty

ASJC Scopus subject areas

  • Software

