Project

Exploring interpretable machine learning models in business analytics

Code
bof/baf/4y/2024/01/716
Duration
01 January 2024 → 31 December 2025
Funding
Regional and community funding: Special Research Fund
Research disciplines
  • Natural sciences
    • Data mining
  • Social sciences
    • Mathematical methods, programming models, mathematical and simulation modelling
Keywords
Explainable AI, business analytics, data processing and machine learning
 
Project description
Enhancing Transparency and Trust in Business Analytics through Explainable AI Models
 
Objective:
This research aims to investigate and develop explainable AI models that can improve transparency, interpretability, and trust in machine 
learning applications within the domain of business analytics. The focus will be on creating models that provide clear insights into their 
decision-making processes, enabling businesses to make informed decisions based on model outputs.
 
Background:
Machine Learning (ML) has become increasingly prevalent in business analytics, offering powerful tools for data-driven decision making. However, many ML models function as "black boxes," providing predictions without clear explanations of how these outcomes were reached. This lack of transparency hinders trust and limits the adoption of such models, especially in critical business decisions where accountability is essential.
 
Methodology:
The study will employ a mixed-methods approach, combining quantitative analysis with qualitative insights:
 
1. Literature Review: A comprehensive review of current literature on explainable AI (XAI), focusing on techniques applicable to business 
analytics, will be conducted to establish a theoretical foundation.
 
2. Model Development: Based on the literature review, several XAI models will be developed or adapted for application in business analytics 
scenarios. These models will aim to strike a balance between predictive performance and interpretability.
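As a minimal sketch of what "interpretable by construction" can mean here, the hypothetical example below fits a small hand-rolled logistic regression and decomposes a single prediction into per-feature contributions to the log-odds (coefficient times feature value), a basic form of local explanation. The churn-style feature names and data are invented for illustration and are not the project's actual models or datasets.

```python
# Illustrative only: an interpretable model whose predictions can be
# decomposed into per-feature contributions. All data is hypothetical.
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic-regression weights by plain gradient descent."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            err = p - yi                      # gradient of log-loss w.r.t. z
            for j in range(n_features):
                w[j] -= lr * err * xi[j]
            b -= lr * err
    return w, b

def explain(w, b, x, names):
    """Per-feature contribution to the log-odds of one prediction."""
    contribs = {name: wj * xj for name, wj, xj in zip(names, w, x)}
    contribs["(intercept)"] = b
    return contribs

# Hypothetical churn data: [tenure_years, support_tickets], 1 = churned.
X = [[5, 0], [4, 1], [1, 4], [0.5, 5], [3, 1], [0.8, 3]]
y = [0, 0, 1, 1, 0, 1]
w, b = train_logistic(X, y)
print(explain(w, b, [1, 4], ["tenure_years", "support_tickets"]))
```

The same additive-contribution idea underlies widely used post-hoc techniques such as SHAP or LIME, which the model-development phase could adapt for less transparent models.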
 
3. Case Studies: Collaborations with industry partners will provide real-world datasets and business problems for case studies. These studies will test the developed models' ability to deliver accurate, actionable insights while providing clear explanations of their decision-making processes.
 
4. Evaluation Framework: A multi-dimensional evaluation framework will be established to assess the models on criteria such as predictive 
accuracy, explainability, robustness, and usability in a business context.
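One way such a framework could combine the named criteria is a weighted aggregate score, with weights reflecting how much a given business context values each dimension. The sketch below is a hypothetical illustration: the criterion names come from the text, but the weights and model scores are invented.

```python
# Hypothetical sketch of a multi-dimensional evaluation: each model is
# scored 0-1 per criterion, then combined with context-specific weights.
CRITERIA = ("accuracy", "explainability", "robustness", "usability")

def evaluate(scores, weights):
    """Weighted aggregate score; weights are normalized to sum to 1."""
    total_w = sum(weights[c] for c in CRITERIA)
    return sum(scores[c] * weights[c] / total_w for c in CRITERIA)

# Example context that weighs explainability heavily (e.g. credit
# decisions, where accountability matters). All numbers are invented.
weights = {"accuracy": 0.3, "explainability": 0.4,
           "robustness": 0.2, "usability": 0.1}
black_box = {"accuracy": 0.92, "explainability": 0.2,
             "robustness": 0.8, "usability": 0.5}
glass_box = {"accuracy": 0.85, "explainability": 0.9,
             "robustness": 0.75, "usability": 0.8}
print(evaluate(black_box, weights))  # lower despite higher raw accuracy
print(evaluate(glass_box, weights))
```

The point of the illustration is that under explainability-heavy weights, a slightly less accurate but transparent model can score higher overall, which is exactly the trade-off the evaluation framework is meant to surface.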
 
Expected Outcomes:
The research is expected to yield several key outcomes:
 
1. Novel XAI models specifically tailored for business analytics applications.
2. A comprehensive evaluation of these models across multiple dimensions relevant to business users.
3. Actionable insights and best practices for integrating XAI into existing business analytics workflows.
 
Impact:
This study has the potential to change significantly how businesses leverage AI, making ML models more accessible, transparent, and trustworthy for decision-makers. By enhancing interpretability, the research can facilitate wider adoption of AI in business contexts, supporting more informed and confident decisions.