Information Processing and Management, Volume 61, Issue 5, September 2024, Article number 103778

Interpretable software estimation with graph neural networks and orthogonal array tuning method (Article) (Open Access)

  • aDepartment of Cognitive Science and Artificial Intelligence, Tilburg University, Warandelaan 2, North-Brabant, Tilburg, 5037 AB, Netherlands
  • bDepartment of Mathematics, Informatics and Statistics, University of Business Academy in Novi Sad, Dusana Popovica 21, Nis, 18 000, Serbia
  • cDepartment of Mathematics and Informatics, University of Novi Sad, Trg Dositeja Obradovica 4, Novi Sad, 21 000, Serbia
  • dFaculty of Health and Business studies Valjevo, Singidunum University, Zeleznicka 5, Valjevo, 14 000, Serbia

Abstract

Software estimation remains suboptimal in efficiency, runtime, and prediction accuracy. Graph Neural Networks (GNNs) are complex models, yet their precise forecasts narrow the gap between expected and actual software development effort, thereby reducing the associated risks. Defining optimal hyperparameter configurations, however, remains a challenge. This paper compares state-of-the-art models, including Long Short-Term Memory (LSTM), Gated Graph Neural Networks (GGNN), and Gated Graph Sequence Neural Networks (GGSNN), and experiments with various hyperparameter settings to optimize their performance. We also extract the most informative feedback from our models by exploring their predictions with a post-hoc, model-agnostic method, Shapley Additive Explanations (SHAP). Our findings indicate that the Taguchi orthogonal array optimization method is the most computationally efficient, yielding notably improved performance metrics: it achieves an RMSE of 0.9211 and an MAE of 310.4 while requiring the fewest runs, a favorable compromise between computational cost and prediction accuracy. For the best-performing model, the GGSNN, within the Constructive Cost Model (COCOMO), Function Point Analysis (FPA), and Use Case Points (UCP) frameworks, applying SHAP yields a more accurate determination of relevance, as evidenced by the norm reduction in activation vectors. SHAP stands out with the smallest area under the curve and the fastest convergence, indicating its efficiency in pinpointing concept relevance. © 2024
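The orthogonal-array tuning idea from the abstract can be sketched in plain Python. The standard Taguchi L9(3^3) design evaluates only 9 of the 27 possible three-factor, three-level combinations and then picks the best level per factor from a main-effects analysis. The hyperparameter names, their levels, and the synthetic `objective` below are illustrative assumptions only, not the paper's actual search space; `objective` stands in for training a model (e.g., a GGSNN) and measuring validation error.

```python
# Hedged sketch: Taguchi L9(3^3) orthogonal-array hyperparameter search.
# Standard L9 design: each factor level appears 3 times, and every pair
# of factors sees all level combinations equally often (orthogonality).
L9 = [
    (0, 0, 0), (0, 1, 1), (0, 2, 2),
    (1, 0, 1), (1, 1, 2), (1, 2, 0),
    (2, 0, 2), (2, 1, 0), (2, 2, 1),
]

# Illustrative hyperparameters and levels (assumptions, not from the paper).
LEVELS = {
    "learning_rate": [1e-3, 1e-2, 1e-1],
    "hidden_units":  [32, 64, 128],
    "dropout":       [0.0, 0.2, 0.5],
}
FACTORS = list(LEVELS)

def objective(cfg):
    # Synthetic loss standing in for validation RMSE of a trained model.
    return (abs(cfg["learning_rate"] - 1e-2) * 10
            + abs(cfg["hidden_units"] - 64) / 64
            + cfg["dropout"])

def taguchi_search():
    # Run only the 9 orthogonal trials instead of all 3**3 = 27 combinations.
    results = []
    for row in L9:
        cfg = {f: LEVELS[f][lvl] for f, lvl in zip(FACTORS, row)}
        results.append((row, objective(cfg)))
    # Main-effects analysis: for each factor, average the loss over the
    # 3 rows at each level and keep the level with the lowest mean.
    best = {}
    for i, f in enumerate(FACTORS):
        means = [sum(loss for row, loss in results if row[i] == lvl) / 3
                 for lvl in range(3)]
        best[f] = LEVELS[f][means.index(min(means))]
    return best
```

Because the design is balanced, each per-level mean differs only by that factor's own contribution, so the main-effects step recovers the per-factor optimum from just 9 runs; this is the "fewest runs" trade-off the abstract attributes to the Taguchi method.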

Author keywords

Graph neural networks; Hyperparameter optimization; SHAP; Software estimation

Indexed keywords

Engineering controlled terms: Activation analysis; Computational efficiency; Forecasting; Graph neural networks; Software design
Engineering uncontrolled terms: Graph neural networks; Hyper-parameter; Hyper-parameter optimizations; Neural network arrays; Neural-networks; Orthogonal array; Runtimes; Shapley; Shapley additive explanation; Software estimation
Engineering main heading: Long short-term memory
  • ISSN: 0306-4573
  • CODEN: IPMAD
  • Source Type: Journal
  • Original language: English
  • DOI: 10.1016/j.ipm.2024.103778
  • Document Type: Article
  • Publisher: Elsevier Ltd

  Rankovic, N.; Department of Cognitive Science and Artificial Intelligence, Tilburg University, Warandelaan 2, North-Brabant, Tilburg, Netherlands;
© Copyright 2024 Elsevier B.V., All rights reserved.

Cited by 2 documents

Lu, Z., Sun, Y., Yang, Z.
Improving generalization in DNNs through enhanced orthogonality in momentum-based optimizers
(2025) Information Processing and Management
Hariyanto, Marjuni, A., Rijati, N.
Systematic Literature Review of Software Effort Estimation: Research Trends, Methods, and Datasets
(2024) Proceedings - 2024 International Seminar on Application for Technology of Information and Communication: Smart and Emerging Technology for a Better Life, iSemantic 2024

SciVal Topic Prominence

Topic: Effort Estimation; Software Development; Machine Learning
Prominence percentile: 93.376