Advancements in Explainable Artificial Intelligence for Enhanced Transparency and Interpretability across Business Applications


Volume 9, Issue 5, Page No 09-20, 2024

Authors: Maikel Leon, Hanna DeSimone


1 Department of Business Technology, Miami Herbert Business School, University of Miami, Miami, Florida, USA
2 Miami Herbert Business School, University of Miami, Miami, Florida, USA

a) Author to whom correspondence should be addressed. E-mail: maikelleon@gmail.com

Adv. Sci. Technol. Eng. Syst. J. 9(5), 09-20 (2024); DOI: 10.25046/aj090502

Keywords: Explainable AI, Transparency, Interpretability


This manuscript offers an in-depth analysis of Explainable Artificial Intelligence (XAI), emphasizing its crucial role in developing transparent and ethically compliant AI systems. It traces AI's evolution from basic algorithms to complex systems capable of autonomous decisions with self-explanation. The paper distinguishes between explainability (making AI decision processes understandable to humans) and interpretability (providing coherent reasons behind these decisions). We explore advanced explanation methodologies, including feature attribution, example-based methods, and rule extraction techniques, emphasizing their importance in high-stakes domains such as healthcare and finance. The study also reviews the current regulatory frameworks governing XAI, assessing their effectiveness in keeping pace with AI innovation and societal expectations. For example, rule extraction from artificial neural networks (ANNs) derives explicit, human-understandable rules that approximate a complex model's behavior, thereby making the decision-making process of ANNs transparent and accessible. Concluding, the paper forecasts future directions for XAI research and regulation, advocating for innovative and ethically sound advancements. This work enhances the dialogue on responsible AI and establishes a foundation for future research and policy in XAI.

Received: 23 July 2024  Revised: 14 September 2024  Accepted: 15 September 2024  Online: 20 September 2024
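The rule extraction described in the abstract can be illustrated with a global surrogate model, one common realization of the idea: a shallow decision tree is fit to a neural network's predictions, and its branches are exported as explicit if/then rules. This is a minimal sketch of the general technique using scikit-learn, not the authors' own method; the dataset and model choices here are illustrative assumptions.

```python
# Sketch of rule extraction via a global surrogate (illustrative, not the paper's method).
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# 1. Train an opaque "black box" model (a small ANN).
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
black_box.fit(X, y)

# 2. Fit a shallow decision tree to the ANN's *predictions* (not the true
#    labels), so the tree approximates the network's decision function.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3. Export explicit, human-readable if/then rules.
rules = export_text(surrogate, feature_names=list(data.feature_names))
print(rules)

# Fidelity: the fraction of inputs on which the extracted rules agree with
# the black box. High fidelity means the rules faithfully mimic the ANN.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.2f}")
```

The surrogate's fidelity, not its accuracy on the true labels, is what matters for explanation quality: the rules describe what the network does, which may include its mistakes.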


