Awards 2025

ICOCT Best Paper Award:

Human-in-the-Loop Breakthrough in Federated AI Earns Best Paper Award at ICOCT 2025

The paper, authored by Ugandhar Dasi, Srinivas Chippagiri, Direesh Reddy Aunugu, and Vijayalaxmi Methuku, introduces the Human-in-the-Loop Federated Learning (HITL-FL) framework. This novel approach integrates domain expert feedback into every phase of federated learning, from raw data validation and preprocessing to model interpretation and update governance.

Federated Learning (FL) has emerged as a promising solution for training machine learning models across distributed data sources while preserving privacy. However, FL faces challenges such as non-IID data, label noise, model drift, and limited interpretability—barriers that restrict its adoption in high-risk domains like healthcare and industrial IoT.

To overcome these issues, the authors developed HITL-FL, a data engineering framework that establishes a structured human-AI collaboration loop. Within this framework:

  • Experts validate and correct training data before local model training

  • Human insights guide feature engineering and help identify misleading inputs

  • Interpretability tools like SHAP and LIME are used to audit model behavior post-training (see the example after this list)

  • Policy-driven rules govern how and when model updates are accepted, flagged, or revised (see the governance sketch below)
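
To make the interpretability step concrete, here is a minimal, self-contained sketch of a post-training SHAP audit. The shap and scikit-learn calls are real, but the model and dataset are stand-ins chosen for brevity; they are not the paper's clinical or IIoT pipelines.

```python
import shap
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Placeholder model and data; the paper's actual pipelines are not public.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Per-feature attributions for a sample of records.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:50])  # shape: (50, n_features)

# Rank features by mean absolute attribution so a domain expert can audit
# whether the model relies on plausible signals before its updates join
# federated aggregation.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```

In a HITL-FL setting, such attributions would be surfaced to a domain expert, who flags features the model should not be relying on.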

The system includes modules for auditability, fairness evaluation, and feedback logging, ensuring that AI outcomes remain traceable, explainable, and aligned with ethical standards. The authors' collaboration produced a robust and scalable architecture that addresses not only technical efficiency but also social trust and operational ethics, concerns often overlooked in conventional federated learning research.
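
The update-governance rules can be pictured as a small server-side policy check. The sketch below is an assumption: the names (GovernancePolicy, review_update), the thresholds, and the specific checks (update magnitude, held-out accuracy) are illustrative placeholders, not the authors' implementation.

```python
# Hypothetical sketch of policy-driven update governance (not the paper's code).
from dataclasses import dataclass
from enum import Enum

import numpy as np


class Decision(Enum):
    ACCEPT = "accept"
    FLAG_FOR_REVIEW = "flag"   # routed to a human expert
    REJECT = "reject"


@dataclass
class GovernancePolicy:
    max_update_norm: float = 5.0    # illustrative cap on update magnitude
    min_val_accuracy: float = 0.70  # illustrative floor on held-out accuracy


def review_update(update: np.ndarray,
                  val_accuracy: float,
                  policy: GovernancePolicy) -> Decision:
    """Apply simple policy rules before an update enters aggregation."""
    if np.linalg.norm(update) > policy.max_update_norm:
        # Unusually large updates may indicate drift or poisoning; defer
        # to a domain expert rather than rejecting automatically.
        return Decision.FLAG_FOR_REVIEW
    if val_accuracy < policy.min_val_accuracy:
        return Decision.REJECT
    return Decision.ACCEPT
```

Flagged updates would be routed to a domain expert rather than silently dropped, which is the distinguishing feature of the human-in-the-loop design.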

Real-World Validation and Impact

The framework was tested using two complex datasets:

  • MIMIC-III, a clinical dataset with sensitive patient health records

  • Edge-IIoTset, featuring multivariate time-series sensor data from industrial IoT systems

Through these experiments, HITL-FL demonstrated:

  • Up to 9% improvement in accuracy and F1-score

  • Substantial reductions in model drift and performance disparity across data silos

  • Higher auditability and faster convergence rates

  • Greater energy efficiency compared to traditional FL approaches

These results affirm the authors’ central thesis: embedding human expertise into decentralized AI systems improves not just performance, but also trust and resilience.

Shaping the Future of Responsible AI

The paper also introduces a set of custom trust metrics, including the Model Drift Score (MDS), Fairness Deviation Index (FDI), and Auditability Index (AI), to monitor system reliability across distributed nodes. When these metrics cross predefined thresholds, the system triggers feedback loops and expert intervention, creating a self-correcting governance cycle.
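
As a rough illustration of that cycle, the snippet below shows how per-node threshold checks could trigger expert review. The metric names come from the paper, but the threshold values, breach directions, and intervention logic are assumptions made for the example.

```python
# Hypothetical monitoring loop; threshold values are placeholders, not the
# paper's definitions, and "higher = worse" is assumed for both metrics.
THRESHOLDS = {"MDS": 0.15, "FDI": 0.10}


def check_node(node_id: str, metrics: dict[str, float]) -> None:
    """Trigger the feedback loop when a trust metric crosses its threshold."""
    breached = [name for name, limit in THRESHOLDS.items()
                if metrics.get(name, 0.0) > limit]
    if breached:
        # In HITL-FL this is where expert intervention would be requested:
        # log the event, pause aggregation for the node, notify a reviewer.
        print(f"Node {node_id}: expert review requested for {breached}")


# Example: a node whose drift score exceeds the illustrative limit.
check_node("hospital-3", {"MDS": 0.22, "FDI": 0.04})
```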

The research team emphasizes that real-world AI deployments, particularly in sensitive and regulated environments, demand more than just algorithmic efficiency—they require traceable, explainable, and accountable models. Their future work will explore real-time expert feedback integration, adaptive human monitoring based on risk analysis, and broader application across hierarchical federated systems.

By integrating machine learning with structured human oversight, this award-winning paper sets a new benchmark for trust-centered AI in decentralized ecosystems—marking a vital step toward scalable, transparent, and ethically aligned AI infrastructure.

About the authors:

Ugandhar Dasi, a Principal Engineer and data systems architect, led efforts on policy-driven governance, model auditing, and integrating trust metrics into the federated lifecycle.

Srinivas Chippagiri, Senior Member of Technical Staff and a leading expert in cloud computing and AI infrastructure, designed the architectural foundation of the system and contributed to the development of fairness metrics and explainability workflows.

Direesh Reddy Aunugu, an independent AI researcher, contributed to simulation strategy, edge-node orchestration, and validation using synthetic and real-world datasets.

Vijayalaxmi Methuku, specializing in explainable AI and semantic modeling, focused on the integration of SHAP/LIME interpretability and developed the feedback interfaces for human-in-the-loop evaluation.