|
|
|
Submit Paper / Call for Papers
The journal accepts papers on a continuous basis and considers articles
from a wide range of Information Technology disciplines, spanning the most
basic research to the most innovative technologies. Please submit your papers
electronically through our submission system at http://jatit.org/submit_paper.php in
MS Word, PDF, or a compatible format so that they may be evaluated for
publication in an upcoming issue. This journal uses a blinded review process;
please remember to include all of your personally identifiable information in the
manuscript before submitting it for review, and we will remove the necessary
information on our side. Submissions to JATIT should be full research / review
papers (properly indicated below the main title).
|
|
|
Journal of
Theoretical and Applied Information Technology
February 2026 | Vol. 104 No. 4 |
|
Title: |
EXPLORING DIGITAL WALLET CONTINUANCE: INSTRUMENT DEVELOPMENT BASED ON
TASK-TECHNOLOGY FIT THEORY AND PRIVACY CALCULUS THEORY |
|
Author: |
ADIBAH AHMAD, RAHAYU AHMAD, SYAHIDA HASSAN |
|
Abstract: |
With the growing adoption of digital wallets, increasing attention has been
directed toward the challenges associated with their continued use, particularly
regarding security and user privacy. This study extends prior research by
integrating concerns related to technology fit and privacy through the lens of
Task-Technology Fit (TTF) Theory and Privacy Calculus Theory, which serve as the
study’s theoretical foundations. To address emerging factors influencing the
sustained use of digital wallets, new constructs have been proposed. The
measurement instruments employed in this study consist of both adapted items
from existing literature and newly developed items tailored to the proposed
constructs. The development process included expert evaluations to ensure
content validity, involving five Information Systems scholars who assessed the
relevance and clarity of the items. A pilot test was conducted involving 38
university students who participated through an online survey. Validity and
reliability testing were conducted using SPSS. The results demonstrate robust
measurement properties, indicating that the items effectively capture the
intended constructs, and the pilot test further confirmed the appropriateness
of the instruments for use in future empirical research. |
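The abstract reports validity and reliability testing in SPSS. A standard reliability statistic for Likert-type instruments of this kind is Cronbach's alpha; a minimal sketch, using hypothetical item scores rather than the study's data, could look like:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (6 respondents, 4 items)
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 3, 2, 3],
])
print(round(cronbach_alpha(scores), 3))
```

Values above roughly 0.7 are conventionally taken as acceptable internal consistency, which is the kind of evidence the abstract's reliability analysis would report.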
|
Keywords: |
Cashless, E-Wallet, TTF, PCT, Continuous Use |
|
DOI: |
https://doi.org/10.5281/zenodo.18822662 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
DUAL-MODE, RATE-AWARE SWEAT SENSING WITH UNCERTAINTY-INFORMED ANALYTICS FOR
HYPERHIDROSIS SCREENING AND MONITORING |
|
Author: |
PADMA BELLAPUKONDA, DR. RAGHVENDRA KUMAR, DR. R N V JAGAN MOHAN |
|
Abstract: |
Wearable sweat sensing enables non-invasive screening and monitoring, but
concentration-only readouts are confounded by secretion dynamics, such as
instantaneous rate and accumulated volume, and by evaporation. These limitations
are especially problematic in clinically important low-sweat regimes, including
mild or treatment-modulated states, observed in hyperhidrosis. We present a
dual-mode, rate-aware platform that fuses resistive wetting and contact
conductance with a capacitive absorbent-dielectric channel to infer local sweat
volume and rate, supported by a skin-interfaced microfluidic layer for
chrono-sampling and evaporation compensation. On this hardware, we introduce
RAISE, Rate-Aware Inference with Sensor Ensembles, an uncertainty-informed
pipeline that derives rate-normalized features, applies probability calibration
using slope and intercept with expected calibration error and Brier score, and
quantifies clinical utility with decision-curve analysis, while reporting
uncertainty using bootstrap confidence intervals. Using patient-level splits and
an external temporally held-out validation cohort, our approach improved
discrimination and reliability over a concentration-only baseline, with delta
AUC of 0.06 and external AUC of 0.92 with 95 percent confidence interval from
0.88 to 0.95. It reduced expected calibration error to 0.031 and Brier score to
0.121, and yielded higher net benefit across clinically relevant thresholds,
with maximum delta of 0.06 and up to 7 avoided interventions per 100 at
probability threshold 0.10. Sensor characterization achieved mean absolute
percentage error below 10 percent versus gravimetry with R squared up to 0.992
across four infusion rates. By coupling dual-mode sensing with rate-aware
calibration and decision-focused analytics, the system delivers clear,
clinically interpretable gains for dependable hyperhidrosis screening and
monitoring. |
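The abstract reports expected calibration error (ECE) and Brier score as reliability measures. A minimal sketch of both, assuming equal-width probability bins and hypothetical predictions rather than the study's cohort, could be:

```python
import numpy as np

def brier_score(y_true, p_pred):
    """Mean squared difference between predicted probability and binary outcome."""
    y_true, p_pred = np.asarray(y_true, float), np.asarray(p_pred, float)
    return float(np.mean((p_pred - y_true) ** 2))

def expected_calibration_error(y_true, p_pred, n_bins=10):
    """Weighted average gap between mean confidence and accuracy per bin."""
    y_true, p_pred = np.asarray(y_true, float), np.asarray(p_pred, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # last bin is closed on the right so p = 1.0 is counted
        mask = (p_pred >= lo) & ((p_pred < hi) if hi < 1.0 else (p_pred <= hi))
        if mask.any():
            gap = abs(p_pred[mask].mean() - y_true[mask].mean())
            ece += mask.mean() * gap
    return float(ece)

# Hypothetical predictions for six subjects (1 = hyperhidrosis)
y = [1, 0, 1, 1, 0, 0]
p = [0.9, 0.2, 0.8, 0.6, 0.3, 0.1]
print(round(brier_score(y, p), 4), round(expected_calibration_error(y, p), 4))
```

Lower values of both metrics indicate better-calibrated probabilities, matching the abstract's reported reductions to 0.031 (ECE) and 0.121 (Brier).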
|
Keywords: |
Sweat Sensing; Hyperhidrosis; Microfluidics; Dual-Mode Capacitive–Resistive
Sensors; Probability Calibration; Decision-Curve Analysis. |
|
DOI: |
https://doi.org/10.5281/zenodo.18822683 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
THE TIME-ORIENTED LEARNER: A FRAMEWORK FOR DESIGNING PERSONALIZED MOBILE
INTERFACES TO ENHANCE TECHNOLOGY ACCEPTANCE AND SELF-EFFICACY, AND REDUCE
COGNITIVE LOAD |
|
Author: |
ZHAO XIN, SITI NAZLEEN ABDUL RABU |
|
Abstract: |
Mobile learning (m-learning) design often relies on a one-size-fits-all,
universal approach. This approach may create cognitive friction for some
learners. This occurs when their inherent time orientation, a stable trait
governing how individuals manage tasks and time, is misaligned with the
interface's structure. This study introduces and evaluates Time-Oriented
Personalized Interfaces (TOPI), a novel approach that aligns interface design
with learners' monochronic, polychronic, or neutral dispositions. We
investigated the effects of this temporal alignment on university students'
technology acceptance, self-efficacy, and cognitive load. A sequential
explanatory mixed-methods design was employed with 150 university students. In
the quantitative phase, monochronic and polychronic learners were assigned to
interact with either a personalized interface matched to their time orientation
or a universal interface serving as a control. Neutral learners were assigned to
the universal interface to assess its baseline effectiveness. Pre- and
post-tests were used to measure the key variables. Subsequently, semi-structured
interviews with 30 participants provided qualitative insights into their
experiences. Quantitative results revealed that, for both monochronic and
polychronic learners, using a temporally aligned interface led to significantly
higher technology acceptance and self-efficacy, as well as significantly lower
cognitive load, compared to the universal baseline. Qualitative analysis
confirmed these findings, showing monochronic learners valued structured,
sequential workflows while polychronic learners thrived in flexible,
multitasking environments. The universal interface was found to be effective and
cognitively manageable for neutral learners. This study makes a significant
contribution by establishing time orientation as a critical, empirically
validated dimension for personalization in m-learning. Theoretically, it extends
Person-Environment Fit theory to the design of digital learning environments.
Practically, it provides a novel framework (TOPI) with actionable principles
that challenge the dominance of one-size-fits-all design by demonstrating a
clear path toward creating more effective, adaptive, and user-centered learning
systems. This study provides compelling evidence that while a generic interface
may be adequate for neutral learners, it is suboptimal for those with stronger
temporal dispositions. Designing adaptive interfaces that accommodate temporal
diversity is a key strategy for enhancing learning quality beyond mere
usability, offering a practical framework for creating more effective and
engaging digital learning environments. |
|
Keywords: |
Time Orientation; Personalized Interface Design; Educational Technology; Mobile
Learning; Cognitive Load; Person-Environment Fit; Adaptive Learning. |
|
DOI: |
https://doi.org/10.5281/zenodo.18822693 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
END-TO-END SECURE COMMUNICATION FOR WIRELESS MULTIMEDIA SENSOR NETWORKS VIA
MODIFIED GORILLA TROOPS OPTIMIZER DRIVEN DATA COMPRESSION WITH ENCRYPTION
APPROACH |
|
Author: |
K. VINAYAKAN, V. ALAMELU MANGAYARKARASI, Dr. A. DINESH KUMAR, M. VASUKI, R.
JAYAKUMAR |
|
Abstract: |
Wireless Multimedia Sensor Networks (WMSNs) have shifted the focus from
traditional scalar wireless sensor networks (WSNs) to networks equipped with
multimedia devices capable of capturing images, audio, video, and scalar sensor
data. WMSN can provide multimedia content owing to the availability of low-cost
CMOS microphones and cameras in addition to the considerable progress in
multimedia source coding and distributed signal processing techniques. In
comparison to a compressed image, an uncompressed image transmission consumes
more energy, and, as a result, it becomes a necessity to establish an
energy-aware compression technique to prolong the sensor node and the network
lifetime. Simultaneously, security encryption systems can be used to protect the
data being transferred, which ensures integrity and confidentiality against
tampering and unauthorized access. This study develops an Integrated Approach
for Secure and Energy-Efficient Data Transmission in Wireless Multimedia Sensor
Networks (IASEE-DTWMSN). The IASEE-DTWMSN method aims to ensure security and
maximum energy efficiency in WMSNs via data compression and encryption
algorithms. Initially, the IASEE-DTWMSN technique compresses the images captured
by the WMSN using a discrete cosine transform (DCT) approach. The secure
transmission of the compressed data is then accomplished using an advanced
encryption standard (AES) model. Furthermore, the IASEE-DTWMSN technique
incorporates a Modified Gorilla Troops Optimizer (MGTO) for optimizing the DCT
and AES models. The MGTO approach determines the optimal quantization
parameters of the DCT methodology and the encryption key selection of the AES
model such that the compression ratio and PSNR are maximized. To validate the
performance of the IASEE-DTWMSN methodology, a broad array of simulations was
conducted. The experimental outcomes indicate that the IASEE-DTWMSN model
achieves enhanced energy efficiency and security in the WMSN. |
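The abstract describes DCT-based compression with an optimizer-tuned quantization step. A minimal sketch of the DCT-and-quantize round trip, assuming an orthonormal 8x8 DCT-II and an illustrative step size q (the paper tunes this with MGTO, which is not shown), could be:

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def compress_block(block: np.ndarray, q: float) -> np.ndarray:
    """DCT-transform an 8x8 block and quantize with step q (coarser q = smaller data)."""
    d = dct_matrix(8)
    coeffs = d @ block @ d.T
    return np.round(coeffs / q)

def decompress_block(quantized: np.ndarray, q: float) -> np.ndarray:
    """Invert quantization and apply the inverse (transposed) orthonormal DCT."""
    d = dct_matrix(8)
    return d.T @ (quantized * q) @ d

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)
q = 16.0  # hypothetical quantization step; MGTO would search for this value
restored = decompress_block(compress_block(block, q), q)
print(round(np.abs(block - restored).max(), 2))
```

Larger q shrinks the quantized coefficients (better compression ratio) at the cost of reconstruction error (lower PSNR), which is exactly the trade-off the optimizer balances.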
|
Keywords: |
Wireless Multimedia Sensor Networks; Advanced Encryption Standard; Gorilla
Troops Optimizer; Data Compression; Encryption |
|
DOI: |
https://doi.org/10.5281/zenodo.18822704 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
EXPLORING AI-GENERATED VISUAL THEMES IN ASIAN ART AS A FOUNDATION FOR ADVANCING
DIGITAL ART RESEARCH |
|
Author: |
DEVI YURISCA BERNANDA , FILSCHA NURPRIHATIN , JOHANES FERNANDES ANDRY , FRANCKA
SAKTI LEE , ANGIE WIYANI PUTRI , RION COMERON |
|
Abstract: |
This study aims to analyze emerging trends and visual themes present in
AI-generated artworks across Asia by employing Descriptive, Predictive, and
Prescriptive Analytics using the Tableau platform. The dataset utilized in this
research comprises 10,000 AI generated image entries, focusing specifically on
attributes such as digital art genre, production time, and Asian regional
context. Through comprehensive analytical stages, the study examines how AI
models construct visual styles, reinterpret artistic traditions, and synthesize
cultural elements within the broader landscape of contemporary digital art. The
primary challenge addressed in this research concerns the difficulty of
identifying promising digital art genres for further exploration. This
complexity arises from the increasingly diverse stylistic variations, the
continuous evolution of visual themes, and the limitations in categorizing AI
generated artworks that blend multiple genres or introduce novel styles that
lack clear definition. To address these challenges, the methodological framework
consists of four major stages: data acquisition and preprocessing using publicly
available datasets; feature extraction and exploratory data analysis to uncover
meaningful patterns and genre distributions; predictive and prescriptive
analytics to determine potential genre trajectories and provide actionable
recommendations; and validation and interpretation to ensure analytical
robustness and contextual relevance. The findings highlight the importance of
sustained thematic innovation, deeper exploration of visual styles, and
deliberate integration of localized cultural motifs to ensure that AI generated
art remains relevant, competitive, and culturally grounded within Asian markets.
Furthermore, strategic recommendations are formulated for each identified genre
category to enhance artistic quality and strengthen the market positioning of AI
generated digital artworks. This research is expected to contribute
significantly to the advancement of AI driven artistic production in Asia by
offering insights that are technical, aesthetic, and cultural in nature,
ultimately supporting the development of more informed, sustainable, and
contextually responsive digital art practices. |
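The descriptive stage the abstract describes amounts to aggregating genre, production-time, and regional attributes. A minimal sketch with hypothetical entries (the names and fields are illustrative, not the dataset's schema) could be:

```python
from collections import Counter

# Hypothetical entries mimicking the dataset's attributes: genre, hours to produce, region
artworks = [
    {"genre": "surrealism", "hours": 2.0, "region": "Southeast Asia"},
    {"genre": "ink-wash",   "hours": 1.5, "region": "East Asia"},
    {"genre": "surrealism", "hours": 3.0, "region": "East Asia"},
    {"genre": "cyberpunk",  "hours": 0.5, "region": "South Asia"},
    {"genre": "ink-wash",   "hours": 2.5, "region": "East Asia"},
]

# Descriptive stage: genre distribution across the collection
genre_counts = Counter(a["genre"] for a in artworks)

# A simple per-genre signal the prescriptive stage could act on
avg_hours = {
    g: sum(a["hours"] for a in artworks if a["genre"] == g) / n
    for g, n in genre_counts.items()
}
print(dict(genre_counts), avg_hours["surrealism"])
```

On the real 10,000-entry dataset the same aggregations would feed the Tableau dashboards the study builds its predictive and prescriptive analyses on.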
|
Keywords: |
Artificial Intelligence, Digital Art, Dataset, Predictive Analytics, Art Genres |
|
DOI: |
https://doi.org/10.5281/zenodo.18822713 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
REAL-TIME INTELLIGENT CLAIM VERIFICATION USING BLOCKCHAIN-BASED RPA AND XGBOOST
FOR SCALABLE HEALTHCARE FRAUD DETECTION |
|
Author: |
DR.V. BALASANKAR, DR. V SRINADH, DESIDI NARSIMHA REDDY, JALA PRASADARAO, K
SWETHA, ELANGOVAN MUNIYANDY |
|
Abstract: |
Healthcare insurance fraud is a major problem that requires smart, scalable,
and auditable detection systems. This study presents a framework that
integrates Robotic Process Automation (RPA) with blockchain technology and
Machine Learning (ML) to detect fraudulent claims in real time. The system is
built in five layers: data input, automation, intelligence, smart contracts,
and the blockchain ledger. It operates on a structured dataset of 4,000
healthcare claims with 83 attributes. Claim intake and rule-based screening are
automated by RPA bots, and fraud classification against predefined risk
thresholds is performed by an ML engine driven by extreme gradient boosting via
smart contracts, with final results immutably stored on Hyperledger Fabric. The
proposed framework significantly outperformed traditional predictive models
(Convolutional Neural Network, Recurrent Neural Network, Decision Trees, and
Logistic Regression), achieving 92.4 percent classification accuracy. It
guarantees scalability and traceability without compromising latency,
processing claims in under 2.5 seconds. Feature-importance measures such as
Kullback-Leibler divergence, entropy, and Gini index improve explainability and
guide the automation logic. This hybrid framework sets a new standard for
real-time, auditable, and intelligent fraud detection in healthcare insurance
systems, providing a technically sound and legally compliant foundation for
next-generation claims processing. |
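The abstract names Kullback-Leibler divergence, entropy, and Gini index as explainability measures. A minimal sketch of the three, with hypothetical class-conditional distributions of a claim attribute rather than the study's data, could be:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    p = np.asarray(p, float)
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

def gini(p):
    """Gini impurity of a discrete distribution."""
    p = np.asarray(p, float)
    return float(1.0 - (p ** 2).sum())

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in bits."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float((p[mask] * np.log2(p[mask] / q[mask])).sum())

# Hypothetical distribution of one claim attribute among fraudulent vs. legitimate claims
p_fraud = [0.7, 0.2, 0.1]
p_legit = [0.3, 0.4, 0.3]
print(round(kl_divergence(p_fraud, p_legit), 3), round(gini(p_fraud), 3))
```

A large divergence between the fraudulent and legitimate distributions marks the attribute as discriminative, which is how such scores can guide the framework's automation logic.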
|
Keywords: |
Healthcare Insurance, Blockchain, Fraudulent Claims, Hyperledger Fabric,
Robotic Process Automation. |
|
DOI: |
https://doi.org/10.5281/zenodo.18822727 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
EFFINET-CL: A FRAMEWORK FOR LEAF DISEASE DETECTION AND CLASSIFICATION USING
CONTINUAL LEARNING |
|
Author: |
SREYA JOHN, Dr. P. J. ARUL LEENA ROSE |
|
Abstract: |
This paper presents an innovative approach to detect and classify leaf diseases
using EfficientNetB3 combined with continual learning strategies. The proposed
model leverages data augmentation techniques for improved generalization and
integrates continual learning through Elastic Weight Consolidation (EWC) and
Replay Memory to prevent catastrophic forgetting. This approach keeps the
model's knowledge of formerly learnt diseases intact while permitting it to
learn continually from new disease classes. The integration of an
Out-of-Distribution (OOD) detection mechanism further improves the ability of
the model to recognize unknown diseases, making it a reliable tool for real-time
agricultural applications. The model demonstrated excellent performance with an
accuracy of 98.44%, highlighting the potential of deep learning combined with
continual learning for plant disease diagnosis, and outperformed ResNet50,
DenseNet121, InceptionV3, and MobileNet-V2 algorithms. |
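The abstract relies on Elastic Weight Consolidation to prevent catastrophic forgetting. The core of EWC is a quadratic penalty that makes parameters important to previously learned disease classes expensive to move; a minimal sketch, with hypothetical parameter and Fisher values rather than the paper's model, could be:

```python
import numpy as np

def ewc_penalty(theta, theta_old, fisher, lam=1000.0):
    """EWC penalty: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    fisher holds the diagonal Fisher information, estimating how important
    each parameter was for the previously learned disease classes."""
    theta, theta_old, fisher = map(np.asarray, (theta, theta_old, fisher))
    return 0.5 * lam * float((fisher * (theta - theta_old) ** 2).sum())

# Hypothetical values: drifting an important weight costs far more
theta_star = np.array([1.0, -0.5, 2.0])   # parameters after the old task
fisher = np.array([5.0, 0.01, 0.2])       # first weight crucial for old classes
theta_new = np.array([1.2, 0.5, 2.0])     # drifted on one important, one unimportant weight

penalty = ewc_penalty(theta_new, theta_star, fisher, lam=100.0)
print(round(penalty, 2))  # → 10.5
```

During continual training this penalty is added to the new-task loss, so the network learns new disease classes while staying close to weights the old classes depend on.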
|
Keywords: |
Leaf Disease Detection, Feature Extraction, Classification, Deep Learning,
Continual Learning |
|
DOI: |
https://doi.org/10.5281/zenodo.18822739 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
SOCIAL NETWORKS AS A COMMUNICATION ENVIRONMENT FOR YOUNG PEOPLE: INFORMATION
THREATS AND EUROPEAN APPROACHES |
|
Author: |
LIUDMYLA MIALKOVSKA, IRYNA ZABIIAKA, VITA STERNICHUK, LARYSA PYLYPIUK, YULIIA
LITKOVYCH, LIUDMYLA DOLOZHEVSKA |
|
Abstract: |
The aim of the study is to identify linguistic tools for creating information
risks in the interpersonal interaction of young people in the field of social
networks and to find ways to develop adjusted countermeasures based on the
European experience in the field of media literacy. The study was characterized
by an interdisciplinary strategy that combined typological analysis of
linguistic phenomena, mechanistic representation of cognitive processes, and
comparative analysis of European media education practices. The research is
based on the synthesis of forty-six scientific publications, archives of
fact-checking organizations, and official documents of the European Commission.
The three most serious risk profiles among young people were identified:
emotional-psychological, cognitive-cultural, and digital-legal. The combination
of emojis with youth slang was found to carry the highest risk index, reaching
0.92 conventional units. With a 34 percent increase in intra-sentence
code-switching, young people tend to trust highly questionable sources more
than neutral messages. TikTok video memes proved the most effective vehicle for
spreading disinformation among young people, scoring 8.1 conventional units
owing to a 96% spread rate combined with only 28% critical reflection on the
content. Adolescents aged 13-16 with an emotionally impulsive personality type
and low digital literacy show a total risk of 7.8 conventional units and are
the most vulnerable group. Adapting European media literacy strategies for
implementation in Ukraine has the potential to reach 69.8-85.1% of the
effectiveness of international standards. The results of the study inform a
theoretical framework for developing individual educational activities to
combat disinformation that take into account the communicative behavior of
adolescents in the bilingual environment of the Ukrainian population. |
|
Keywords: |
Internet Communication, Social Networks, Normative Communication, English,
Ukrainian, Stylistics, Memes, Emojis, Abbreviations |
|
DOI: |
https://doi.org/10.5281/zenodo.18822757 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
AN INTELLIGENT PRIORITY-BASED TELEMEDICINE ALGORITHM FOR REAL-TIME EMERGENCY
HEALTH MANAGEMENT IN MILITARY ENVIRONMENTS |
|
Author: |
DR.S. HEMALATHA, DR. VIJAYA KUMBHAR, SURYA LAKSHMI KANTHAM VINTI, K. PADMAPRIYA,
DR.S. MURUGANANDAM, S. K. SATYANARAYANA |
|
Abstract: |
In military environments, especially on the battlefield, timely medical
treatment saves lives, but conventional telemedicine schemes rely on
first-come, first-served (FCFS) scheduling, which can deny emergency patients
timely treatment. Although many methods using recent technologies have been
proposed to address this scenario, a novel method is still needed. This article
addresses that gap by proposing an Intelligent Priority-Based Telemedicine
Algorithm (IPTA), deployed on a wearable device that combines modern sensors,
the Internet of Things, and machine learning. Vital parameters such as heart
rate, oxygen saturation, blood pressure, and body temperature are continuously
monitored and analyzed through machine learning-based predictive analytics to
estimate each soldier's health criticality. The proposed framework
automatically assesses each soldier's condition and assigns a priority for
consultation with a telemedicine doctor. The experiment was evaluated in
simulation using MATLAB with Python and OMNeT++ for connectivity, comparing
quantitative and qualitative metrics against the traditional FCFS, rule-based,
and fuzzy-logic methods. Quantitative results show that the proposed IPTA
framework improves accuracy (91.6%), resource utilization (91%), packet
delivery ratio (94%), latency (85 ms), and average response time (2.9 s).
System throughput increased by 31%, and survival probability improved by
24%. |
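The priority-based scheduling the abstract describes can be sketched with a priority queue over criticality scores. The paper estimates criticality with machine learning; the rule-of-thumb scoring below is a hypothetical stand-in, and the vital-sign thresholds are illustrative, not clinical guidance:

```python
import heapq

def criticality(vitals):
    """Crude illustrative score from vital signs (higher = more critical)."""
    score = 0.0
    if vitals["heart_rate"] > 120 or vitals["heart_rate"] < 45:
        score += 3
    if vitals["spo2"] < 92:
        score += 4
    if vitals["systolic_bp"] < 90:
        score += 3
    if vitals["temperature"] > 39.0:
        score += 1
    return score

soldiers = {
    "S1": {"heart_rate": 80,  "spo2": 98, "systolic_bp": 120, "temperature": 37.0},
    "S2": {"heart_rate": 135, "spo2": 88, "systolic_bp": 85,  "temperature": 39.5},
    "S3": {"heart_rate": 110, "spo2": 91, "systolic_bp": 95,  "temperature": 38.0},
}

# Max-heap via negated score: the most critical soldier is consulted first
queue = [(-criticality(v), sid) for sid, v in soldiers.items()]
heapq.heapify(queue)
order = [heapq.heappop(queue)[1] for _ in range(len(queue))]
print(order)  # → ['S2', 'S3', 'S1']
```

Unlike FCFS, the queue reorders consultations whenever new sensor readings raise a soldier's criticality, which is the behavior behind the latency and survival-probability gains the abstract reports.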
|
Keywords: |
Telehealth, Internet of Things, Wearable Device, Telemedicine Algorithm,
Artificial Intelligence |
|
DOI: |
https://doi.org/10.5281/zenodo.18822763 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
CAN FEDERATED DIFFERENTIALLY PRIVATE MULTI-MODAL TRANSFORMERS IMPROVE RARE
HEALTHCARE FRAUD DETECTION? |
|
Author: |
VISHAL NAMIREDDY, DR. DURAIRAJ.M, DESIDI NARSIMHA REDDY, UDAYALAXMI ADITYA TEKI,
ELANGOVAN MUNIYANDY, BALAKRISHNA BANGARU |
|
Abstract: |
An ongoing challenge in healthcare insurance fraud detection is designing
intelligent algorithms that identify subtle and evolving fraud patterns while
preserving patient privacy. We propose FraudBERT-MM, a multi-modal framework
that fuses structured claim features with unstructured textual explanations to
form comprehensive claim representations. The architecture combines a BERT-based
text encoder, an MLP-based structured encoder, a cross-modal fusion layer and a
binary classification head. To address scarce labelled fraud examples, we apply
prototype-based few-shot learning with episodic training. For privacy-preserving
collaboration, models are trained in a federated learning setup augmented with
Gaussian differential privacy, achieving a privacy budget of ε = 1.87. Unlike
prior work that assumes large centralized labeled datasets and single-modality
inputs, FraudBERT-MM integrates multi-modal learning, few-shot adaptation and
formal privacy guarantees in one unified pipeline. We evaluate the approach on a
real-world dataset of 4,000 claims with 83 features. FraudBERT-MM attains an F1
score of 0.882, outperforming centralized non-private, text-only and
structured-only baselines. The model also converges 26% faster than the
centralized baseline while preserving strong privacy, with only a small utility
loss from noise. Experimental results indicate improved detection of rare and
emerging fraud trends without compromising data privacy. We provide
implementation details, ablation studies and deployment guidelines to support
reproducibility and accelerate safe adoption in clinical and insurance settings
across diverse operational contexts. These findings suggest that combining
multi-modal representations, few-shot adaptation and federated differential
privacy offers a practical and effective path toward deployable, privacy-aware
healthcare fraud detection systems. |
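The federated setup the abstract describes adds Gaussian differential privacy to model updates. A minimal sketch of the per-example clip-average-noise mechanism (clip norm and noise multiplier are illustrative; computing the reported ε = 1.87 requires a privacy accountant, not shown) could be:

```python
import numpy as np

def dp_sanitize(grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip each per-example gradient to clip_norm, average, and add Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for g in grads:
        norm = np.linalg.norm(g)
        # scale down any gradient whose L2 norm exceeds the clip bound
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean = np.mean(clipped, axis=0)
    # noise scale follows the Gaussian mechanism: sigma * clip_norm / batch size
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(grads), size=mean.shape)
    return mean + noise

# Hypothetical per-claim gradients from one client's local batch
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4]), np.array([-1.0, 0.0])]
update = dp_sanitize(grads)
print(update.shape)  # → (2,)
```

Only the sanitized update leaves the client, so no single claim's gradient can dominate what the federated server sees; larger noise multipliers strengthen privacy at the cost of the small utility loss the abstract mentions.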
|
Keywords: |
DDoS Detection, Contrastive Learning, Large Language Models (LLM), Firewall,
Hybrid Model. |
|
DOI: |
https://doi.org/10.5281/zenodo.18822773 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
ARTIFICIAL INTELLIGENCE-BASED EARLY PREDICTION OF THERMAL RUNAWAY IN AGV BATTERY
CELLS |
|
Author: |
OH CHANG-HO, MA TAE-YOUNG, KIM YONG-KUK, GEUN-JIN AHN, PARK SUNGBUM |
|
Abstract: |
Automated Guided Vehicles (AGVs) in semiconductor manufacturing operate
continuously on lithium-ion battery packs, which can undergo thermal runaway
under abusive conditions such as over-charge, over-discharge, or cooling
failure. This study presents an artificial intelligence (AI)-based
early-warning framework that predicts imminent thermal runaway in AGV battery
cells. We collected a large-scale dataset of 12.7 million time-series records
from the battery-management system (BMS) of AGVs in a real factory
environment, including cell-level temperature, voltage, current, alarm, and
cycle-count information. Three predictive models (XGBoost, 1D CNN, and LSTM)
were developed to forecast the next-step maximum cell temperature using a
24-minute sliding window. The LSTM achieved the best performance (MAPE ≈ 1.66 %) compared to the CNN (≈
1.67 %) and XGBoost (≈ 1.76 %). Based on LSTM residuals and predicted threshold
exceedance, an early warning mechanism was implemented, raising alarms several
minutes before the temperature crossed critical limits. The results demonstrate
that AI-driven forecast algorithm enables practical pre-runaway warning and can
be embedded in AGV safety-monitoring systems in high-throughput semiconductor
logistics applications. |
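The sliding-window forecasting setup the abstract describes can be sketched by slicing the temperature series into input windows and next-step targets. The paper uses a 24-minute window; the toy 4-step window and readings below are illustrative:

```python
import numpy as np

def make_windows(series, window, horizon=1):
    """Slice a 1-D series into (X, y): each X row holds `window` past readings,
    y is the value `horizon` steps after the window (the next-step max temperature)."""
    X, y = [], []
    for start in range(len(series) - window - horizon + 1):
        X.append(series[start:start + window])
        y.append(series[start + window + horizon - 1])
    return np.array(X), np.array(y)

# Hypothetical per-minute max-cell-temperature readings approaching runaway
temps = np.array([30.0, 30.5, 31.0, 32.0, 34.0, 37.0, 41.0])
X, y = make_windows(temps, window=4)
print(X.shape, y.tolist())  # → (3, 4) [34.0, 37.0, 41.0]
```

The resulting (X, y) pairs are what an LSTM, CNN, or XGBoost regressor would be trained on; an alarm is then raised when the forecast for the next step exceeds the critical temperature threshold.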
|
Keywords: |
Thermal runaway, Lithium-ion battery, Automated Guided Vehicle (AGV), LSTM, CNN,
XGBoost, Early warning, Battery management system (BMS) |
|
DOI: |
https://doi.org/10.5281/zenodo.18822793 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
PERSONALIZED RESEARCH RECOMMENDATION BASED ON TEMPORAL FUSION TRANSFORMER FOR
AMAZON FASHION PRODUCTS |
|
Author: |
GOKILAVANI A, DR P AMUDHA |
|
Abstract: |
Personalized product recommendations have become more valuable in recent years
as a means of improving the online shopping experience for customers. However,
accurate sales forecasting for fashion products remains challenging due to high
demand variability, short product life cycles, and complex temporal dependencies
influenced by pricing, discounts, and customer behavior. A robust framework for
sales prediction is created using a structured pipeline that integrates data
collection, preprocessing, feature selection, predictive modeling, and
performance analysis. The first step is gathering sales data from
Amazon, a dataset known for its size, complexity, and range of features. To
ensure data quality and relevance, the data undergoes preparation steps such as
cleaning, transformation, and exploratory data analysis (EDA), which
help to identify patterns and correlations in the dataset and prepare it for
modeling. During the feature selection stage, the Boruta algorithm is employed,
utilizing its ability to identify significant characteristics that improve the
model's predictive capabilities. This approach retains only the most relevant
information, reducing computing complexity and increasing model efficiency.
For the prediction phase, three different models are examined: a CNN-BiLSTM
model, which combines convolutional layers for feature extraction with
bidirectional LSTMs for sequential learning; a Temporal Fusion Transformer
(TFT), which is well-known for its interpretability and ability to control
temporal dynamics; and a hybrid model that incorporates aspects of both CNN and
TFT. This combination aims to capture complex temporal patterns and interactions
within the data to optimize forecast accuracy. The models' performance is
evaluated using a variety of error metrics, including Mean Absolute Percentage
Error (MAPE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and
Weighted Mean Absolute Percentage Error (WMAPE). These measures provide a
comprehensive picture of prediction accuracy by highlighting both absolute and
relative errors. By using a range of evaluation criteria, this study ensures
that the models' performance is looked at from multiple perspectives, offering
insights into the models' practicality and robustness. Experimental results show
that the proposed hybrid CNN-TFT model outperforms CNN-BiLSTM and standalone TFT
models, achieving an accuracy of 97.25% with lower MAPE (0.383) and RMSE
(0.1908). The results demonstrate that the proposed hybrid framework provides
reliable and accurate sales forecasts, supporting effective demand planning,
inventory optimization, and decision-making in fashion e-commerce platforms. |
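The four error metrics the abstract evaluates (MAPE, MSE, RMSE, WMAPE) can be sketched directly. The weekly sales figures below are hypothetical, not the Amazon dataset:

```python
import numpy as np

def mape(y, f):
    """Mean Absolute Percentage Error (%)."""
    y, f = np.asarray(y, float), np.asarray(f, float)
    return float(np.mean(np.abs((y - f) / y)) * 100)

def mse(y, f):
    """Mean Squared Error."""
    y, f = np.asarray(y, float), np.asarray(f, float)
    return float(np.mean((y - f) ** 2))

def rmse(y, f):
    """Root Mean Squared Error."""
    return float(np.sqrt(mse(y, f)))

def wmape(y, f):
    """Weighted MAPE (%): total absolute error relative to total actual sales."""
    y, f = np.asarray(y, float), np.asarray(f, float)
    return float(np.abs(y - f).sum() / np.abs(y).sum() * 100)

# Hypothetical weekly unit sales vs. forecasts
actual   = [100, 250, 50, 400]
forecast = [110, 240, 60, 380]
print(round(mape(actual, forecast), 2), round(wmape(actual, forecast), 2))  # → 9.75 6.25
```

MAPE weights every period equally, so small-volume weeks dominate it, while WMAPE weights errors by actual volume; reporting both, as the study does, guards against either distortion.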
|
Keywords: |
Temporal Fusion Transformer, CNN, LSTM, Fashion Sales, Prediction. |
|
DOI: |
https://doi.org/10.5281/zenodo.18822814 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
SIMPLIFIED ROUTING FOR MOBILE AD HOC NETWORKS USING A COMBINED RESOURCE METRIC |
|
Author: |
MOHAMMED ABDUL BARI, ABDUL WASE MOHAMMED, ARSHAD AHMAD KHAN MOHAMMAD, SIREESHA
MOTURI, ANUSHA MAROUTHU |
|
Abstract: |
Mobile Ad Hoc Networks are autonomous, infrastructure-less networks consisting
of wireless mobile devices. Within the network, the sender communicates with
the receiver directly only if the two are within radio range of one another;
otherwise, it must communicate through intermediate nodes. To enable reliable
communication, cooperation among intermediate nodes and the use of their
resources are essential; otherwise, packets are dropped. The primary reasons
for these drops are battery drain, full buffers, and high traffic at the
intermediate nodes. These packet drops are unintentional. This paper proposes a
simplified routing method to address undesirable packet drops by planning routes
that avoid congestion and maximize resource utilization at nodes. These routes
are determined based on each intermediate node's remaining power, available
memory, and current traffic levels. Performance evaluation of this protocol is
conducted on a simulated network and compared with other standard reactive
methods, accounting for uniform traffic distribution. The simulation results
show that this protocol can significantly reduce undesirable packet drops and
increase successful transmissions, ensuring a reliable communication system for
critical applications, including disaster relief operations and healthcare
systems. |
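The abstract says routes are chosen from each intermediate node's remaining power, available memory, and current traffic. The exact combined metric is not given in the abstract, so the sketch below assumes one plausible combination: the product of normalized residual energy, free buffer, and (1 - traffic load), with a route scored by its weakest node. All node values are hypothetical:

```python
nodes = {
    "A": {"energy": 0.9, "buffer": 0.8, "traffic": 0.2},
    "B": {"energy": 0.4, "buffer": 0.9, "traffic": 0.1},
    "C": {"energy": 0.8, "buffer": 0.3, "traffic": 0.7},
    "D": {"energy": 0.7, "buffer": 0.7, "traffic": 0.3},
}

def node_score(n):
    """Combined resource score in [0, 1] for one intermediate node."""
    r = nodes[n]
    return r["energy"] * r["buffer"] * (1.0 - r["traffic"])

def route_score(route):
    """A route is only as strong as its most depleted intermediate node."""
    return min(node_score(n) for n in route)

# Candidate routes (lists of intermediate nodes) discovered between sender and receiver
candidate_routes = [["A", "B"], ["A", "C"], ["D"]]
best = max(candidate_routes, key=route_score)
print(best)  # → ['D']
```

Avoiding routes whose bottleneck node is nearly depleted or congested is what reduces the unintentional packet drops the abstract targets.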
|
Keywords: |
Mobile Ad Hoc Networks, Packet Drops, Routing, Energy, Buffer, Packet Delivery. |
|
DOI: |
https://doi.org/10.5281/zenodo.18822819 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
AN INTEGRATED ENHANCED DEEP LEARNING MODEL FOR DETECTING AND CLASSIFYING CROP
DISEASES IN THE EARLY STAGES TO MAXIMIZE YIELD |
|
Author: |
DHIRAJ BHISE, SUNIL KUMAR, HITESH MOHOPATRA, BHUPESH KUMAR SINGH |
|
Abstract: |
The agriculture sector plays a crucial role in the economy of every country,
yet farmers face numerous challenges. Researchers and academicians are working
to overcome these challenges by introducing new techniques and machine learning
and AI models. Among the many challenges faced by farmers, leaf diseases stand
out as formidable adversaries, persistently diminishing harvests and directly
reducing crop yield. Researchers and academicians are developing a generalized
model applicable to all crops; in this initial stage we have focused on wheat,
which is widely cultivated and consumed as a staple food, and subsequent
development of the generalized model will cover many other crops. Wheat plant
diseases have reduced yields in recent decades, and decreased crop yield
directly affects a country's economic growth and has a social impact on
farming. Recent technological advancements in artificial intelligence (AI),
machine learning (ML), and deep learning (DL) have made it easier to identify
leaf diseases in their early stages and have made it possible to develop
autonomous systems that watch for diseases early on. In this approach, we use
an open dataset from Plant Village containing more than 5,000 plant leaf
images, divided into four types of leaf disease plus healthy leaves. The
proposed system combines a CNN and Mask RCNN: the CNN identifies the type of
disease in the input image, while Mask RCNN masks the exact area of the leaf
covered by the disease. The proposed integrated deep learning model applies
convolutional neural networks (CNN) for classification and for identifying the
disease-affected region, achieving 97% accuracy. In the proposed work, the
model integrates these two different methods, CNN and Mask RCNN. |
|
Keywords: |
Deep Learning, Machine Learning, Region Based Convolutional Neural Network, Leaf
Disease; Artificial Intelligence, Classification Model, Forecast Illnesses. |
|
DOI: |
https://doi.org/10.5281/zenodo.18823735 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
EDGE-CENTRIC PRIVACY-PRESERVING VIDEO ANALYTICS FOR LATENCY-SENSITIVE MONITORING |
|
Author: |
DR GAURAV VISHNU LONDHE, DR. S. MAHESWARI, DR.V.RADHIKA, LAVANYA KUMARI PITHANI,
V SIVARAMARAJU VETUKURI, ELANGOVAN MUNIYANDY |
|
Abstract: |
Real-time video analytics at the network edge is essential for smart
surveillance, autonomous monitoring, and safety-critical decision making.
However, maintaining high detection accuracy under severe latency limits and
demonstrable privacy protection is difficult. Raw video transmission on the
cloud increases delay and privacy risks, while entirely on-device solutions are
limited by computational and energy resources. This paper presents a hybrid
edge-centric video analytics architecture that balances latency, accuracy, and
privacy via split inference, lightweight on-device processing, and
privacy-preserving feature protection. The early layers of a compact backbone
perform motion filtering, region-of-interest extraction, and feature computation
on the device, while a nearby edge server performs deeper inference.
Quantization and differential privacy safeguard intermediate features, with
selective homomorphic encryption applied to a small set of critical layers for
stronger guarantees. An adaptive runtime controller chooses the split point and
compression level based on bandwidth and device load. On VIRAT, PEViD, and Sultani-derived
datasets, the suggested technique achieves sub-100 ms median end-to-end latency
and near-cloud detection accuracy. Results indicate that modest differential
privacy (ε ≈ 1) significantly reduces membership-inference attack success with
minimal accuracy loss, whereas full homomorphic encryption is too
latency-intensive. Split inference and lightweight differential privacy create a
realistic and deployable privacy-aware real-time edge video analytics solution. |
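The adaptive split-point selection described above can be sketched as a simple latency model: run the first k layers on the device, transmit the intermediate features, and finish on the edge server. The cost model, array layout, and all numbers below are illustrative assumptions, not measurements or code from the paper:

```python
def choose_split(device_ms, server_ms, feat_kb, bandwidth_kbps):
    """Pick how many layers to run on-device (0..L) to minimize end-to-end latency.

    device_ms[i] / server_ms[i]: per-layer compute time (ms) on device / edge server.
    feat_kb[k]: size of the tensor produced after k layers (feat_kb[0] = raw frame).
    """
    def latency_ms(k):
        # kB -> kbit -> seconds over the link -> milliseconds.
        transmit = feat_kb[k] * 8.0 / bandwidth_kbps * 1000.0
        return sum(device_ms[:k]) + transmit + sum(server_ms[k:])
    return min(range(len(device_ms) + 1), key=latency_ms)
```

With a slow link the controller pushes the split deeper, since feature maps shrink layer by layer; with a fast link it offloads almost everything to the stronger edge server.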
|
Keywords: |
Video Analytics, Split Inference, Differential Privacy, Homomorphic Encryption,
Latency Constraints, Adaptive Controller |
|
DOI: |
https://doi.org/10.5281/zenodo.18823749 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
QUANTUM-ENHANCED FEATURE EXTRACTION (QUFEX): A CONCEPTUAL FRAMEWORK FOR
PRECISION ATMOSPHERIC RIVER ANALYSIS |
|
Author: |
SIVACHITRALAKSHMI , CHITRA P |
|
Abstract: |
Atmospheric Rivers (ARs) are major contributors to global hydrological
variability and flood hazards, particularly in regions influenced by monsoonal
systems. Accurate detection of ARs remains a challenge due to their dynamic,
non-linear spatial patterns. This study presents a comparative evaluation of
three advanced deep learning models—U-Net, CGNet, and DeepLabV3+—and proposes
Q-ARNet, a quantum-enhanced atmospheric river detection framework based on a
Quantum Neural Network (QNN) for image segmentation. The experiments were
conducted on a dataset comprising 574 AR events from 1951 to 2020, incorporating
variables such as integrated water vapor (IWV), zonal and meridional wind
components (U850, V850), and sea-level pressure (PSL). Q-ARNet outperforms
current classical benchmarks by approximately 3.5% in accuracy while utilizing a
substantially reduced input feature set, thereby addressing the computational
bottleneck associated with high-dimensional climate data. The proposed framework
achieves an average accuracy of 97.5 ± 0.3%, an Intersection over Union (IoU) of
94.5 ± 0.4%, and an F1-score of 95.7 ± 0.2%, consistently surpassing classical
deep learning models. These findings highlight the potential of quantum neural
networks to enhance both the representational capacity and computational
efficiency of atmospheric modeling through quantum parallelism and entanglement.
The study contributes to the growing intersection of quantum computing and
climate informatics, offering a promising framework for improved flood
forecasting and climate risk management. |
|
Keywords: |
Atmospheric Rivers, Flood Forecasting, Deep Learning, Quantum Neural Networks,
Semantic Segmentation, Evaluation Metrics. |
|
DOI: |
https://doi.org/10.5281/zenodo.18823768 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
EFFICIENT MPPT FOR PV SYSTEMS UNDER SHADING CONDITION USING HYBRID METAHEURISTIC
OPTIMIZATION MODEL |
|
Author: |
HARITHA INAPAGOLLA , R ASHOK BAKKIYARAJ, KATRAGADDA SWARNA SRI |
|
Abstract: |
The increasing penetration of photovoltaic (PV) systems in the energy generation
mix makes it necessary for Maximum Power Point Tracking (MPPT) techniques to
become ever more efficient. PV systems often operate under partial shading and
exhibit many local peaks on the power-voltage (P-V) curve, making Global Maximum
Power Point (GMPP) tracking hard. In such conditions, traditional MPPT methods
such as Perturb and Observe (P&O) and Incremental Conductance (IC) often fail to
track the GMPP effectively, leading to higher power losses and decreased
efficiency. This research proposes a new hybrid metaheuristic method combining
Particle Swarm Optimization (PSO) and Differential Evolution (DE) to improve
MPPT performance in PV systems. The hybrid PSO-DE model couples PSO's fast
convergence with DE's randomized search directions, giving better exploration
and robustness to local maxima, and thus locates the GMPP correctly even under
varied shading situations. The novelty of the work lies in an organized hybrid
PSO-DE MPPT framework that balances quick convergence against global search for
dependable GMPP tracking under partially shaded conditions. The proposed method
incorporates both PSO exploitation and DE exploration in one optimization loop,
making it more stable, faster to converge, and more resilient than existing
standalone or weakly coupled hybrid MPPT techniques. The results clearly
indicate substantial improvements in tracking speed, power output, and stability
with respect to existing methods, especially under non-uniform shading. These
findings can help improve the efficiency and reliability of solar energy
systems, making the PSO-DE hybrid model usable for real-world applications. This
work advances the field of MPPT by supporting technologies that respond to
fluctuating operating conditions in renewable energy systems and provides
insight into more efficient solar solutions. |
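The single-loop combination of PSO exploitation and DE exploration that the abstract describes can be sketched on a toy two-peak P-V curve. The curve shape, coefficients, population size, and hyperparameters below are illustrative assumptions, not the authors' parameters:

```python
import math
import random

def pv_power(v: float) -> float:
    # Toy P-V curve with a local peak near v = 0.25 and the global peak near
    # v = 0.70, mimicking the multi-peak curve caused by partial shading.
    return (40.0 * math.exp(-((v - 0.25) / 0.08) ** 2)
            + 100.0 * math.exp(-((v - 0.70) / 0.10) ** 2))

def pso_de_mppt(n_particles: int = 20, iters: int = 60, seed: int = 1) -> float:
    rng = random.Random(seed)
    pos = [rng.random() for _ in range(n_particles)]   # normalized voltages
    vel = [0.0] * n_particles
    pbest = pos[:]
    gbest = max(pos, key=pv_power)
    for _ in range(iters):
        for i in range(n_particles):
            # PSO exploitation: pull each particle toward its personal and the global best.
            vel[i] = (0.6 * vel[i]
                      + 1.5 * rng.random() * (pbest[i] - pos[i])
                      + 1.5 * rng.random() * (gbest - pos[i]))
            pos[i] = min(1.0, max(0.0, pos[i] + vel[i]))
            # DE/rand/1 exploration: a randomized difference vector escapes local peaks.
            a, b, c = rng.sample(range(n_particles), 3)
            trial = min(1.0, max(0.0, pos[a] + 0.8 * (pos[b] - pos[c])))
            if pv_power(trial) > pv_power(pos[i]):
                pos[i] = trial
            if pv_power(pos[i]) > pv_power(pbest[i]):
                pbest[i] = pos[i]
        gbest = max(pbest, key=pv_power)
    return gbest
```

Pure P&O would climb whichever peak it starts near; the DE trial vectors are what let the swarm escape the 0.25 local peak and settle on the global one.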
|
Keywords: |
Photovoltaic Systems, Maximum Power Point Tracking, Partial Shading, Global
Maximum Power Point, Hybrid Meta-heuristics, Particle Swarm Optimization,
Differential Evolution. |
|
DOI: |
https://doi.org/10.5281/zenodo.18823775 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
ONTOLOGY-AWARE MULTIMODAL NLP FRAMEWORK INTEGRATING TRANSFORMER–CNN FUSION FOR
ENHANCED MEDICAL DATA AND MANUSCRIPT ANALYSIS |
|
Author: |
EMMADI SUSHMA , Dr. SUKLA SATAPATHY |
|
Abstract: |
The exponential growth of unstructured medical data—ranging from clinical notes
and electronic health records (EHRs) to academic publications—creates
substantial difficulties for automated interpretation. Conventional natural
language processing (NLP) models, though useful for textual information, are
limited in their ability to assimilate additional modalities such as diagnostic
imaging, graphical records, and clinical diagrams, which are essential for a
holistic analysis. To address this limitation, we present an ontology-guided
multimodal NLP framework that integrates Transformer-based encoders for textual
inputs with convolutional neural networks (CNNs) designed for medical images and
diagrams. A specialized fusion layer combines these heterogeneous
representations, thereby enabling context-sensitive reasoning and predictive
modeling. The framework is further enhanced through the incorporation of
established medical ontologies, including UMLS and SNOMED CT, which improve
semantic accuracy and interpretation of domain-specific terminology. For
empirical validation, the system was tested using two benchmark datasets:
MIMIC-III, which provides comprehensive textual medical records, and Open-i,
which offers a large collection of biomedical images. The proposed framework
consistently outperformed leading baselines, achieving 97.36% accuracy, 96.82%
precision, 96.59% recall, and a 96.70% F1-score. In addition to strong
predictive performance, the architecture demonstrates scalability,
interpretability, and compliance with stringent privacy standards, making it
practical for deployment in real-world healthcare environments. By effectively
unifying multimodal information with domain-specific ontological knowledge, this
work delivers a computationally efficient and clinically relevant solution for
advancing automated medical document analysis and decision support. |
|
Keywords: |
Natural Language Processing; Deep Learning; Multimodal Data Integration;
Transformer Networks; Convolutional Neural Networks; Medical Ontologies; Medical
Informatics; Clinical |
|
DOI: |
https://doi.org/10.5281/zenodo.18823802 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
FE-DADT: A FEATURE ENGINEERING DRIVEN DDOS ATTACK DETECTION TECHNIQUE AND
EVALUATION FOR EFFECTIVE MITIGATION |
|
Author: |
SWATHI KATHI, NARSIMHA GUGULOTHU |
|
Abstract: |
Distributed denial-of-service (DDoS) attacks have been a persistent threat to
the continuous operation of network services, depleting their essential
resources so that normal users cannot access those services. As these attacks
evolve, even the basic task of differentiating malicious packets from benign
packets within traffic has become complex. Therefore, effective mitigation
relies on accurate identification of discriminatory features of the network
traffic, as opposed to similar (but non-redundant) sample-level analysis in a
higher-dimensional space. In this paper, we present a filter-based feature
selection method rooted in the statistical validity of the Student's t-test that
identifies the strongest predictors from traffic flow data. The task is to find
a minimal but optimal set of features that improves classifier performance by
removing noise and redundancy. On the CICDDoS2019 benchmark dataset, the model
was trained on 399,998 samples and tested on 112,611. The results show that the
t-test detects 58 features with statistical significance. With this optimized
feature set, an eXtreme Gradient Boosting (XGBoost) classifier reached a 99.82%
detection rate and 97.40% classification accuracy. Our results show that a
statistically grounded feature elimination approach yields a significant
increase in the performance of ML-based DDoS detection systems. |
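The filter step can be illustrated with a per-feature Welch t-test: compare each flow feature's values in benign versus attack traffic and keep only the columns whose class means differ significantly. The critical value, the toy data, and the function names are assumptions for illustration, not the paper's exact procedure:

```python
import math
from statistics import mean, variance

def welch_t(x, y):
    # Welch's t statistic for two independent samples with unequal variances.
    se = math.sqrt(variance(x) / len(x) + variance(y) / len(y))
    return (mean(x) - mean(y)) / se

def select_features(benign, attack, t_crit=1.96):
    # benign/attack: lists of flow-feature vectors. Keep the column indices
    # whose class means differ significantly; the rest are treated as noise.
    keep = []
    for j in range(len(benign[0])):
        t = welch_t([row[j] for row in benign], [row[j] for row in attack])
        if abs(t) > t_crit:
            keep.append(j)
    return keep
```

A downstream classifier (XGBoost, in the paper) is then trained only on the kept columns, which is where the noise-and-redundancy removal pays off.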
|
Keywords: |
DDoS, Classification, Network traffic flow, Prediction, Feature selection,
Hypothesis Testing, Machine Learning |
|
DOI: |
https://doi.org/10.5281/zenodo.18823810 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
FEDERATED LEARNING WITH BLOCKCHAIN FOR SECURE AND SCALABLE FINANCIAL SERVICES |
|
Author: |
DR. NAIM SHAIKH, DR. A.PANKAJAM, DR. VIVEK VEERAIAH, SHEETAL PRADIP PATIL, DR.
MAMATHA G, DR. SRIDEVI.R, ANKUR GUPTA, DR. M.YELLAIAH NAYUDU |
|
Abstract: |
People are justifiably apprehensive about the safety, privacy, and scalability
of their data in collaborative machine learning contexts now that digital
financial services are widespread. Such data is hard to access, however, and the
parties involved do not trust one another. Blockchain is a distributed ledger
technology that stores and verifies data without requiring a trusted third
party; used alone, however, it struggles with scalability and energy
consumption. This research proposes a hybrid architecture that combines
blockchain with federated learning (FL) to solve these problems and deliver
financial services that can scale. The approach uses blockchain technology to
protect the data sovereignty of organisations, and it makes it easier for
financial institutions to collaborate on global models by enabling auditable
change tracking, trust building, and individual incentives for participants.
The suggested strategy outperforms FL-only and blockchain-only solutions in
accuracy, scalability, and safety, while energy consumption and query response
times remain at reasonable levels. The system supports several important
applications, including investigating fraud claims, assessing credit risk, and
stopping money laundering. The outcomes help make the financial industry's
digital infrastructure safer, more open, and more compliant with the law. |
|
Keywords: |
Federated Learning, Blockchain, Financial Services, Security, Fraud Detection,
Credit Risk Analysis, Anti-Money Laundering. |
|
DOI: |
https://doi.org/10.5281/zenodo.18823826 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
INNOVATIVE APPROACHES TO EDUCATIONAL KNOWLEDGE GRAPHS: LEVERAGING NEO4J AND LLM
FOR SYNTHETIC DATA GENERATION |
|
Author: |
TANTY OKTAVIA , M BAGASKORO TRIWICAKSANA S |
|
Abstract: |
In today's rapidly evolving higher education landscape, the utilization of data
is critical for gaining insights that enhance institutional effectiveness.
However, accessing and leveraging this data presents significant challenges due
to privacy, licensing, and security constraints. This research evaluates the
effectiveness of using Large Language Model (LLM)-generated synthetic data for
constructing and visualizing educational knowledge graphs (KGs) using Neo4j. By
integrating Neo4j's graph capabilities with LLM's generative power, we developed
comprehensive knowledge graphs across five key university domains: curriculum
development, personalized learning, learning analytics, resource management, and
alumni engagement. Our methodology employs a two-stage prompting strategy with
Claude 3.5 Sonnet to generate synthetic datasets that simulate real-world
educational scenarios, achieving data completeness rates of 77.55-100% and
domain alignment scores of 82.84-100%. The synthetic data successfully captured
complex educational relationships through varied node types (5-16) and
relationship types (4-16) across domains, while relationship accuracy metrics
(38.21-46.55%) demonstrated focused, meaningful connections. Implementation
results show efficient processing times (3-8 seconds per domain) and strong
practical utility in modelling educational patterns and relationships. This
approach significantly reduces data collection time and resources while
maintaining data privacy and integrity. While limitations exist regarding
validation against real-world datasets, our findings demonstrate that
LLM-generated synthetic data can effectively prototype educational knowledge
graphs, offering institutions a scalable, privacy-preserving framework for
developing data-driven insights and decision-making capabilities. |
|
Keywords: |
Knowledge Graphs (KGs), Neo4j, Synthetic Data, Large Language Models (LLMs),
Curriculum Development, Personalized Learning, Learning Analytics, Resource
Management, Alumni Engagement |
|
DOI: |
https://doi.org/10.5281/zenodo.18823853 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
A BLOCKCHAIN-DRIVEN MULTI-AGENT FRAMEWORK FOR ENHANCING TRANSPARENCY,
EFFICIENCY, AND RESILIENCE IN SMART SUPPLY CHAIN MANAGEMENT |
|
Author: |
DR. P. SANTHOSH KUMAR, DR. A.PANKAJAM, DR. VIVEK VEERAIAH, DR. ADVETA GHARAT,
DR. MAMATHA G, DR. DEVIKA RANI ROY, ANKUR GUPTA, N RAJITHA |
|
Abstract: |
As global supply chains have evolved quickly, challenges including transparency,
collaboration, security, and the capacity to make choices in real time have
become increasingly important. The present research fails to offer a unified,
extensible framework that integrates decentralised trust, intelligent
optimisation, and resilience across diverse supply chain contexts, even though
blockchain and multi-agent systems have been examined independently to enhance
autonomy and traceability. Most prior research is either too narrow in scope or
has not demonstrated that integrated blockchain-MAS systems work in practice,
and most studies have examined only a single service, such as traceability or
finance. The present study addresses this
informational gap by proposing a novel Blockchain-Driven Multi-Agent Framework
for the optimisation of supply chain activities from inception to completion.
This system combines decentralised ledger technology, autonomous agents,
reinforcement learning, and smart contract automation. The framework's novel
contributions include a scalable on-chain/off-chain data strategy, intelligent
optimisation techniques for cost, resilience, and sustainability, and a
formalisation of hybrid agent-based coordination on blockchain. In the agricultural,
industrial, and logistical sectors, simulation-based evaluations show big
improvements over traditional methods in the following areas: transparency
(+65%), speed of decision-making (+48%), time to recover from an interruption
(-60%), energy efficiency (+32%), and reduction of product spoilage (-66%). The
research contributes to the current knowledge on intelligent, decentralised
supply chains by offering a generalisable architecture, empirical performance
data, and practical insights. |
|
Keywords: |
Blockchain, Multi-Agent Systems, Smart Supply Chain, Reinforcement Learning,
Decentralized Optimization, Transparency, Resilience, Sustainability |
|
DOI: |
https://doi.org/10.5281/zenodo.18823863 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
FEDERATED DEEP LEARNING FOR DIABETES PREDICTION IN INDIA: ENHANCING PRIVACY AND
INTERPRETABILITY WITH DP-SGD AND LIME |
|
Author: |
P. CHITRALINGAPPA , DR RAMATENKI SATEESH KUMAR , ARTIKA FARHANA, SHYAMSUNDER
CHITTA , SRIDEVI GAMINI , SANDA SRI HARSHA |
|
Abstract: |
Diabetes prevalence in South Asia is increasing, and automated risk prediction
is necessary to enable early intervention. This work proposes FedHybNet, a
privacy-preserving federated hybrid architecture that combines a TabTransformer
backbone with per-client personalization, applies differentially private
stochastic gradient descent (DP-SGD) for formal Differential Privacy (DP)
guarantees, and uses Local Interpretable Model-agnostic Explanations (LIME) for
instance-level interpretability. FedHybNet protects client updates via DP-SGD
while preserving predictive utility through a hybrid aggregation scheme and
lightweight on-device personalization. Performance and interpretability were
evaluated on a recent public South-Asia tabular benchmark (DiaBD, 5,288 records,
2025) and on a larger synthetic India cohort (15,000 records) designed to match
national demographics and covariate distributions. Models were compared to
XGBoost (centralized classical baseline) and an advanced federated baseline
(FedDL). Experiments used stratified 5-fold cross-validation and reported AUROC,
AUPRC, F1-macro, expected calibration error (ECE), computational cost (GFLOPs),
training time, and privacy budget (ϵ,δ). FedHybNet achieved AUROC 0.89 and AUPRC
0.69 at ϵ=2.0, improving AUROC by 0.01–0.03 (≈1.1–3.5% relative) over baselines
and reducing calibration error (ECE) by 28.9–41.8% relative. Training cost
increased due to the Transformer encoder and DP overhead, but the
privacy–utility trade-off favored FedHybNet versus non-DP FL. These results
indicate FedHybNet is practical for privacy-aware, interpretable diabetes risk
prediction in distributed clinical settings. |
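The DP-SGD mechanism at the core of FedHybNet's privacy guarantee, per-example gradient clipping followed by calibrated Gaussian noise, can be sketched as below. The function name, dimensions, and hyperparameters are illustrative, not the paper's settings:

```python
import math
import random

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_mult=1.1, seed=0):
    """One differentially private gradient estimate in the DP-SGD style.

    Each per-example gradient is clipped to L2 norm <= clip_norm (bounding any
    single record's influence), the clipped gradients are summed, Gaussian
    noise with std noise_mult * clip_norm is added per coordinate, and the
    result is averaged over the batch.
    """
    rng = random.Random(seed)
    dim = len(per_example_grads[0])
    summed = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(v * v for v in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for j in range(dim):
            summed[j] += g[j] * scale
    sigma = noise_mult * clip_norm
    noisy = [s + rng.gauss(0.0, sigma) for s in summed]
    return [v / len(per_example_grads) for v in noisy]
```

The clip bounds each client record's sensitivity, so the Gaussian noise scale translates into a formal (ε, δ) budget via the usual DP-SGD accountant; smaller ε means larger noise_mult and thus more utility loss.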
|
Keywords: |
Federated learning, Differential privacy, DP-SGD, Diabetes prediction, LIME,
Interpretability. |
|
DOI: |
https://doi.org/10.5281/zenodo.18824252 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
GRASSHOPPER OPTIMIZED VISION TRANSFORMER WITH ADAPTIVE TOKEN CALIBRATION DROPOUT
SCALING QKV ATTENTION REFINEMENT IN COTTON LEAF DISEASE IDENTIFICATION |
|
Author: |
K. KARTHIGA , Dr. B. RAJDEEPA |
|
Abstract: |
Accurate identification of cotton leaf diseases remains a complex task due to
high visual similarity among disease types, irregular lesion boundaries,
background interference, and the early-stage subtlety of symptoms in
field-captured images. Current literature lacks transformer-based disease
identification frameworks that dynamically regulate attention, token relevance,
and learning stability in response to lesion morphology. This study bridges that
gap by introducing an optimization-driven attention control mechanism that
generates new insights into lesion-focused transformer learning for precision
agriculture. Traditional models and vanilla Vision Transformers struggle with
fixed patch segmentation, redundant token processing, and diluted attention
focus, leading to suboptimal classification. To address these challenges, this
research proposes a novel model titled Grasshopper Optimized Vision Transformer
(GO-ViT), engineered for lesion-aware feature refinement in cotton leaf disease
identification. GO-ViT integrates swarm-based Grasshopper Optimization to
dynamically configure patch embedding size, attention head scaling, dropout
distribution, QKV matrix refinement, and entropy-based token pruning. This
biologically inspired regulation emulates adaptive foraging patterns, ensuring
high lesion focus and reduced attention waste. GO-ViT enhances inter-class
separation, preserves spatial continuity, and minimizes false activation over
non-symptomatic regions. Experimental evaluation across five performance metrics
confirms GO-ViT’s robustness, with 94.491% Overall Detection Efficiency and
88.985% Balanced Class Correlation. The model demonstrates superior ability to
isolate overlapping infections, suppress background noise, and deliver stable
predictions across diverse cotton leaf conditions. GO-ViT stands as a reliable
solution for scalable, high-precision disease classification in precision
agriculture, combining visual attention learning with nature-driven optimization
control. |
|
Keywords: |
Cotton Leaf Disease, Vision Transformer, Grasshopper Optimization, Token
Pruning, Attention Calibration, Lesion Segmentation |
|
DOI: |
https://doi.org/10.5281/zenodo.18824266 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
THE DEPLOYMENT OF DIGITAL HR TECHNOLOGIES AS A TOOL FOR ENHANCING ENTERPRISE
PRODUCTIVITY IN THE DIGITAL ECONOMY |
|
Author: |
SEVIL RAMAZANOVA , OLHA MIZINA , NATALIA NOVHORODSKA , ULIANA KOROLOVA , DENYS
KOROLOV |
|
Abstract: |
Relevance of the study: The relevance of this study stems from the imperative
to integrate HRTech solutions to enhance productivity within the digital
economy, signifying a paradigm shift in human capital management, and the
necessity to institutionalize Employer Branding as a driver for sustainable
development. Purpose of the study: The study aims to formalize a hybrid
HRTech Framework integrated with Employer Branding, alongside verifying its
impact on labor productivity through econometric modeling, cognitive-emotional
stratification, as well as architectural interoperability. Research methods:
The methodologies employed in this research include SWOT analysis, econometric
modeling, structural synthesis frameworks, cognitive-emotional modeling, and
econometric validation. Results obtained: The integrative analysis revealed
the superior efficacy of People Analytics (0.84; ΔLP +18.2%; fluctuation
–25.3%), AI communications (0.82; TTTHR –21.4%; inclusivity +15.9%), and HRIS
(0.80; HRΔCOE –23.1%; adaptability +39.6%). Based on these findings, a
tri-layered HRTech Framework incorporating Employer Branding architecture was
devised, which facilitates cognitive traceability and brand extrapolation. Upon
validating the project framework, a noteworthy enhancement in key performance
indicators was documented (ΔLP +19%, TTTHR –24%, RRI = 0.84, AIU = 0.88, EER =
0.86), indicating a high degree of cognitive relevance, HR conversion
efficiency, and digital adaptability of the model within the context of
strategic HR digitalization. Scientific novelty of the study: The
scientific novelty of this study lies in formalizing the HRTech Framework with
Employer Branding as a hybrid architecture that synergizes People Analytics, AI
communications, and HCM solutions with brand-oriented modules, thereby providing
cognitive traceability, integrative influence, and affective validity in the
context of digital productivity. Prospects for future research: The
prospects for future research entail comprehensive implementation of the HRTech
Framework with Employer Branding in a tangible HR environment, concentrating on
behavioral dynamics, institutional interoperability, and brand-oriented
adaptation. It is advisable to initiate a controlled pilot project with traced
verification of procedural resilience, AIU stability, as well as the framework’s
digital efficacy. |
|
Keywords: |
Employment, Labour, Wage, Economic Empowerment, SMEs, Sustainable Development,
Green Job |
|
DOI: |
https://doi.org/10.5281/zenodo.18824280 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
PRIVACY-PRESERVING CUSTOMER DATA MANAGEMENT USING HYBRID AI-CRYPTOGRAPHY MODELS |
|
Author: |
DR. C.NAGA GANESH, DR. MAMATHA G , DR. VIVEK VEERAIAH , DR. DEVIKA RANI ROY ,
AMARJA ADGAONKAR , MRS. SUPRIYA SANJAY AJAGEKAR , ANKUR GUPTA, DR. PATIL AMIT
MADHUKAR |
|
Abstract: |
5G networks, IoT devices, and cloud computing have accelerated innovation, but
they have also made systems more vulnerable to increasingly sophisticated cyber
attacks. Traditional security measures work to some extent, but they cannot
handle the complex, resource-constrained, and constantly changing digital
environments of today. This paper introduces
an innovative framework for creating resilient, scalable, and flexible security
systems via the integration of artificial intelligence and advanced cryptography
methodologies. A hybrid deep learning model that incorporates CNN, LSTM, and
Autoencoders (AE) is utilised to detect threats. This model is more accurate
than baseline methods and has fewer false positives. Dynamic key rotation across
IoT, mobile, and cloud infrastructures may improve cryptographic resilience via
reinforcement learning-driven key management. Blockchain-based trust management
makes transactions and contracts more accountable and transparent, while
federated learning lowers communication costs and ensures privacy-preserving
model training. Experimental evaluations of healthcare, IoT, and 5G
datasets indicate that the system exhibits enhanced accuracy (94.3%), increased
resilience to adversarial attacks (with an attack success rate reduction of
70%), and improved energy efficiency. Multi-objective optimisation shows that
the framework strikes a good balance between performance, scalability, and
security, making it well suited for next-generation digital infrastructures. This
work aims to include post-quantum solutions and explainable AI into future
advancements, facilitating the convergence of AI and cryptography to enhance
security. |
|
Keywords: |
AI, Cryptography, Federated Learning, Blockchain, Reinforcement Learning, Deep
Learning, Cybersecurity, IoT, 5G Networks, Post-Quantum Cryptography, Trust
Management, Secure Data Management. |
|
DOI: |
https://doi.org/10.5281/zenodo.18824315 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4-- 2026 |
|
Full
Text |
|
|
Title: |
HYBRID DEEP LEARNING AND OPTIMIZATION-DRIVEN FRAMEWORK FOR ENHANCED SKIN CANCER
DETECTION AND CLASSIFICATION |
|
Author: |
A CHANDRA SEKHAR , D VENKATESH |
|
Abstract: |
Globally, skin cancer is one of the most common diseases caused by excessive
exposure to ultraviolet (UV) radiation. The outcomes and prognosis for treating
skin malignancies significantly improve with early detection and accurate
diagnosis. However, many diagnostic methods have limitations, entail high
computational costs, rely heavily on manual feature extraction, lack good
generalization across datasets, and are susceptible to adversarial attacks. This
paper addresses these limitations in skin cancer diagnosis by proposing an
improved Wavelet-AHE Diffusion Enhanced Hybrid Network (WADE-HNet) for enhancing
detection as well as classification. The proposed ensemble technique integrates
ResEff-FuseNet for feature extraction, Firefly-Bitterling Adaptive Selection
Optimization (FBASO) for optimal feature selection, Multi-Stage Attention
Capsule Network (MSA-CapsNet) for enhanced classification performance, and
Modified U-Net++ for lesion segmentation. Such improvements increase the
interpretability, generalizability, and computational efficiency of model
performance. Finally, empirical results for both the HAM10000 and ISIC 2019
datasets validate that WADE-HNet outperforms benchmark models with a high
accuracy of 99.39%. In this way, the proposed strategy ensures consistency in
clinical usage while reducing false positives and false negatives. |
|
Keywords: |
Skin Cancer Detection, DL, Feature Selection, Capsule Networks, Optimization,
Image Processing, WADE-HNet |
|
DOI: |
https://doi.org/10.5281/zenodo.18824335 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4 -- 2026 |
|
Full
Text |
|
|
Title: |
EFFICIENT COFFEE PLANT DISEASE PREDICTION WITH ADAPTIVE FEATURE SELECTION AND
DEEP CONVOLUTIONAL NEURAL NETWORK |
|
Author: |
P. GOBINATH, Dr. M. RAMASWAMI |
|
Abstract: |
The problem of predicting disease in coffee plants has been widely studied, and
numerous techniques exist for it. However, existing approaches struggle to
achieve high accuracy in predicting the diseases that affect plant growth. To
address this, an efficient Adaptive Invariant Feature Selection and
Approximation based Disease Prediction with DCNN (AIFSADP-DCNN) is proposed.
The method applies an Intensity Deviation Normalization technique to normalize
the leaf image and enhance image quality. A Color Quantization Segmentation
algorithm is then applied to segment the coffee plant features. Next, color,
texture, and distribution features are extracted from the segmented image. The
extracted features are used to train a deep convolutional neural network with
three convolution and pooling layers. The output layer neurons are designed to
measure Color Constraint Support (CCS), Texture Constraint Support (TCS), and
Distribution Constraint Support (DCS) for the various feature classes. Finally,
the method estimates Plant Disease Support (PDS) for each class, based on which
it identifies the disease class. The proposed model improves the performance of
disease prediction in coffee plants, with accuracy of up to 96.5%. |
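The preprocessing steps named in the abstract (intensity normalization and
color quantization) are generic image operations; a minimal NumPy sketch, with
all function names and parameter values invented for illustration (not the
paper's actual AIFSADP-DCNN pipeline), might look like:

```python
import numpy as np

def intensity_deviation_normalize(img):
    # Zero-mean, unit-deviation scaling per channel, then rescaled to [0, 1]:
    # a generic stand-in for an "Intensity Deviation Normalization" step.
    img = img.astype(np.float64)
    z = (img - img.mean(axis=(0, 1))) / (img.std(axis=(0, 1)) + 1e-9)
    lo, hi = z.min(), z.max()
    return (z - lo) / (hi - lo)

def quantize_colors(img, levels=4):
    # Uniform color quantization: bucket each channel into `levels` bins,
    # a simple proxy for a color-quantization segmentation stage.
    return np.floor(img * levels).clip(0, levels - 1) / (levels - 1)

rng = np.random.default_rng(0)
leaf = rng.random((32, 32, 3))            # stand-in for a leaf image
norm = intensity_deviation_normalize(leaf)
seg = quantize_colors(norm, levels=4)
print(norm.min(), norm.max())             # normalized range
print(np.unique(seg).size)                # number of quantized levels
```

The quantized image can then feed a standard feature-extraction and CNN
training stage as the abstract describes.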
|
Keywords: |
Disease Prediction, Plant Management, Deep Learning, CNN, AIFSADP-DCNN,
PDS, DCS, TCS, CCS |
|
DOI: |
https://doi.org/10.5281/zenodo.18824350 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4 -- 2026 |
|
Full
Text |
|
|
Title: |
BIG DATA AS A DECISION-MAKING TOOL IN HUMAN RESOURCE MANAGEMENT |
|
Author: |
DMYTRO KOBETS, IRYNA CHERNOVA, TARAS MUKHA, VUGAR SALMANOV, SVITLANA
ALEKSANDROVA |
|
Abstract: |
The relevance of the study is determined by the need for high-performance data
processing architectures to increase the efficiency of human resource management
(HR) in the context of digital transformation and the increasing complexity of
analytical processes. The aim of the research is to create an optimized
data-driven HR decision-making architecture integrating machine learning (ML)
modules, adaptive pipelines, and streaming analytics, verified through Unified
Modelling Language (UML) models and metric comparison. Research methods: SWOT
analysis, structural optimization decomposition, UML modelling, comparative
analysis of UML models, and metric comparison of the optimized architecture.
The optimized HR Data-Driven Decision-Making architecture
provided an increase in predictive accuracy to 0.962 (+8.4%), metric stability
(+9.1%), and discriminability (+7.8%). SWOT analysis identified 6 strengths, 5
critical weaknesses, 4 risks; comparison confirmed the superiority of SAP
Analytics Cloud (0.94; 0.91; 0.89) over the market average (0.81; 0.78; 0.75).
UML integration of adaptive feature engineering pipelines, cognitively optimized
ML modules, and streaming analytics increased interoperability (+14.6%), modular
integrity (+12.3%) and algorithmic resilience (+15.1%). The academic novelty of
the study is the formalization and verification of an optimized HR Data-Driven
Decision-Making architecture that integrates ML modules, feature engineering,
and stream analytics, providing a predictive accuracy of 0.962 (+8.4%) and
interoperability of +14.6%. Prospects for further research include localized
implementation of the architecture in a test environment with metric and
cognitive functional verification, iterative optimization of modules based on
the results of institutional contextual empirical testing. |
|
Keywords: |
Data Analytics, Human Resource Management, System Architecture Modelling,
Decision-Making Architecture, Data-Driven Analytics, Predictive Modelling,
Organizational |
|
DOI: |
https://doi.org/10.5281/zenodo.18824366 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4 -- 2026 |
|
Full
Text |
|
|
Title: |
DEVELOPING A HYBRID LOG-FILE–BASED COVERT CHANNEL FOR SECURE DIGITAL FORENSIC
DOCUMENT TRANSMISSION |
|
Author: |
TAMER S. FATAYER, SAMY S. ABU-NASER, MUSBAH J. MOUSA |
|
Abstract: |
Security is a fundamental requirement in Digital Forensic (DF) systems, given
the increasing number and sophistication of cyber threats. Among the major
challenges in DF is ensuring the secure transmission of Digital Forensic
Documents (DFDs) between authorized entities. This paper proposes a hybrid
covert channel designed to enhance the confidentiality and undetectability of
DFD transfers by exploiting runtime errors and system log files. The proposed
method leverages the Digital Forensic Legal Affairs Server (DFLAS) log file,
where original DFDs embedded with covert data are written and subsequently
transmitted through a concealed communication channel. The original files remain
preserved in the server’s log file after transmission, ensuring both integrity
and traceability. To achieve robust security, the hybrid covert channel
integrates randomization, encryption, and controlled packet delay, collectively
reinforcing confidentiality and resistance to detection. The performance
evaluation focuses on transmission time and bandwidth efficiency during DFD
exchange. Experimental results show that the proposed
channel achieves high efficiency in covert data transmission, with transmission
time and throughput comparable to conventional channels. Additionally, it
provides a high level of confidentiality through the use of encryption and
randomization techniques. The results show that transferring a 112 MB file
requires approximately 4000 ms on the DFLAS side, demonstrating competitive
performance compared with existing approaches. The results further indicate that
the extraction capacity rate varies according to the applied delay intervals
while maintaining practical throughput levels suitable for DF real-world
deployment. The achieved throughput of 29.12 MB/sec confirms the method’s
practicality. The proposed covert channel provides confidentiality,
plausibility, and undetectability, making it a promising approach for secure and
efficient transmission of digital forensic documents. |
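The core idea the abstract describes, encrypting a payload and disguising it
as an innocuous runtime-error log entry, can be illustrated with a toy sketch.
The XOR cipher, field names, and log format below are invented stand-ins for
the paper's actual DFLAS mechanism, which is not specified in the abstract:

```python
import os
import base64

def embed_in_log(payload: bytes, key: bytes) -> str:
    # XOR-encrypt the payload (a stand-in for the paper's encryption step),
    # then hide it as the "detail" field of a fake runtime-error log line.
    enc = bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))
    token = base64.b64encode(enc).decode()
    return f"[ERROR] runtime fault 0x{os.urandom(2).hex()}: detail={token}"

def extract_from_log(line: str, key: bytes) -> bytes:
    # The receiver strips the disguise and reverses the XOR encryption.
    token = line.rsplit("detail=", 1)[1]
    enc = base64.b64decode(token)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(enc))

key = b"secret-key"
line = embed_in_log(b"DFD fragment #1", key)
recovered = extract_from_log(line, key)
print(line)
```

A real deployment would add the randomization and controlled packet-delay
layers the paper describes on top of this embedding step.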
|
Keywords: |
Covert channel, Digital forensics, Log file, Undetectability, Encryption. |
|
DOI: |
https://doi.org/10.5281/zenodo.18824392 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4 -- 2026 |
|
Full
Text |
|
|
Title: |
AI-READY VR/360° E-LEARNING PLATFORM FOR DISTANCE EDUCATION: SYSTEM
ARCHITECTURE, DEPLOYMENT, AND ADOPTION EVIDENCE |
|
Author: |
ANASS TOUIMA, MOHAMED MOUGHIT |
|
Abstract: |
The sudden move to distance learning during the COVID-19 pandemic forced
universities to rely heavily on videoconferencing tools and basic learning
management systems. While these solutions kept courses running, many students
reported fatigue, disengagement, and difficulties visualising complex concepts.
In response to these challenges, this study describes the design and preliminary
evaluation of a Virtual Reality (VR) e-learning platform, extended with an
AI-ready facial expression recognition module and a 360° video web alternative,
deployed in a Moroccan higher-education context. The platform allows students to
access an immersive virtual campus and interactive learning scenarios either
through VR headsets or through 360° video on standard devices. A total of 106
students from the Higher Institute of Information and Communication (ISIC)
interacted with the system and then completed a structured questionnaire about
their experience, perceived usefulness, and intention to adopt the platform.
Descriptive analyses were complemented with chi-square and Spearman correlation
tests to explore differences between academic levels and the role of prior
exposure to immersive media. The results show high awareness of VR and 360°
video, strong perceived added value of VR for distance learning, and broad
support for regular use of the platform, with no major differences between
Licence and Master students and only weak associations with previous VR
experience. These findings position the proposed AI-ready VR environment as a
promising complement to conventional distance learning tools and motivate future
controlled studies focusing on objective learning outcomes and the full
integration of AI-driven emotion tracking. |
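The chi-square and Spearman correlation analyses mentioned above are standard
tests; a SciPy sketch with invented counts and Likert scores (not the study's
actual survey data) shows the shape of such an analysis:

```python
import numpy as np
from scipy import stats

# Hypothetical contingency table: rows = academic level (Licence, Master),
# columns = intention to adopt the platform (yes, no). Counts are invented.
table = np.array([[45, 12],
                  [38, 11]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

# Hypothetical ordinal scores: prior VR exposure vs. perceived usefulness,
# both on a 1-5 Likert scale, for 106 simulated respondents.
rng = np.random.default_rng(1)
exposure = rng.integers(1, 6, size=106)
usefulness = np.clip(exposure + rng.integers(-2, 3, size=106), 1, 5)
rho, p_rho = stats.spearmanr(exposure, usefulness)

print(f"chi2={chi2:.3f}, p={p_chi:.3f}, dof={dof}")
print(f"spearman rho={rho:.3f}, p={p_rho:.3f}")
```

A non-significant chi-square p-value would correspond to the study's finding
of no major difference between Licence and Master students.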
|
Keywords: |
Distance Learning, Virtual Reality, VR, Artificial Intelligence, Higher
Education |
|
DOI: |
https://doi.org/10.5281/zenodo.18824411 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4 -- 2026 |
|
Full
Text |
|
|
Title: |
METAVERSE-BASED VIRTUAL EDUCATION PLATFORMS USING BLOCKCHAIN FOR CREDENTIAL
VERIFICATION |
|
Author: |
DR. VIVEK VEERAIAH, DR. M. YELLAIAH NAYUDU, DR. MAMATHA G, DR. TARUN DALAL,
M AHMER USMANI, DR. VIPIN JAIN, MR. VINOD MOTIRAM RATHOD, DR. HUMA KHAN |
|
Abstract: |
Digital classrooms increasingly combine engaging technologies with secure
methods for preserving and exchanging information. This article proposes a
blockchain-based Metaverse virtual education network to ensure that credentials
are authentic and secure. However, much of the research on blockchain-based
credentialing systems and metaverse learning environments is theoretical or
survey-based. Credential reliability, integration with existing systems,
scalability, and user confidence in metaverse learning platforms remain
under-researched, and studies that take an integrated, implementation-oriented
approach with empirical evaluation are needed. This research introduces and
evaluates a Metaverse Virtual Education Platform that uses blockchain
technology to validate credentials transparently and securely. Our solution
addresses the shortcomings of conventional learning management systems (LMS) in
handling student information transparently, keeping credentials secure, and
encouraging adoption. While there is increasing interest in both
blockchain-based credentials and metaverse-based education, existing solutions
currently lack a cohesive and validated methodology. Secure, dependable, and
easily verifiable academic credentials are paramount within decentralised
virtual learning environments; this system is indispensable for realising these
objectives. The platform's effectiveness has been demonstrated through multiple
trials. The metaverse platform, when compared to its competitors, exhibits a
higher degree of consumer engagement, evidenced by an average usage duration of
41 minutes, in contrast to the 22 minutes observed on alternative platforms, and
a superior satisfaction rating of 4.7 out of 5. Although blockchain effectively
mitigates the risk of identity theft and fraud, it does introduce a slight delay
in the credential validation process; Hyperledger requires 2.1 seconds, while
Ethereum necessitates 1.8 seconds. Furthermore, a security analysis indicates
that unauthorised modifications are more readily identifiable, and MFA is
streamlined. The system's responsiveness is unaffected by concurrent user load.
Navigation, design, effectiveness, credential trust, and user satisfaction
constitute five key usability metrics, and the system demonstrates strong
performance in each area. This research suggests that metaverse designs
incorporating blockchain-based certification systems can facilitate the
development of secure, engaging, and novel online learning environments. |
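Blockchain-backed credential verification rests on tamper-evident hashing. The
hash-chained ledger below is a toy stand-in (field names and the credential
record are invented, and a real system such as the Hyperledger or Ethereum
deployments the abstract benchmarks would add consensus and signatures):

```python
import hashlib
import json

# Toy append-only credential ledger: each record hashes its payload together
# with the previous record's hash, so any later tampering is detectable.
ledger = []

def issue(credential: dict) -> str:
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps(credential, sort_keys=True)
    h = hashlib.sha256((prev + payload).encode()).hexdigest()
    ledger.append({"prev": prev, "payload": payload, "hash": h})
    return h

def verify(idx: int) -> bool:
    # Recompute the hash from the stored fields and compare.
    rec = ledger[idx]
    expected = hashlib.sha256((rec["prev"] + rec["payload"]).encode()).hexdigest()
    return expected == rec["hash"]

issue({"student": "alice", "degree": "BSc", "year": 2026})
print(verify(0))
```

Changing any field of a stored credential makes `verify` fail, which is the
property that makes unauthorised modifications "readily identifiable".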
|
Keywords: |
Metaverse, Virtual Education, Blockchain, Credential Verification, Digital
Credentials, Decentralized Systems, Education Security, Trust Management |
|
DOI: |
https://doi.org/10.5281/zenodo.18824426 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4 -- 2026 |
|
Full
Text |
|
|
Title: |
AN INTERPRETABLE DEEP LEARNING BASED DECISION SUPPORT SYSTEM FOR STOCK
FORECASTING AND INVESTMENT RECOMMENDATION USING TEMPORAL FUSION TRANSFORMER |
|
Author: |
D. MANJU, M. KOTESWARA RAO, RAJESH KUMAR VERMA, PADMINI DEBBARMA, ANAND
KUMAR SARASWATHI RATHOD |
|
Abstract: |
The stock market changes rapidly, driven by economic conditions, company
performance, and investor behavior. As a result, accurate stock price
prediction and sound investment decisions are difficult to achieve. Investors
today use computer-based systems to study large volumes of financial data and
find useful patterns that are barely perceptible with traditional methods.
However, most existing stock prediction and recommendation systems have some
major problems. Many systems use only technical indicators or only fundamental
data, rather than both together. Some models cannot properly capture both the
short-term and long-term price movements of a stock. Moreover, many deep
learning models behave like black boxes and fail to explain clearly why a stock
is recommended as a buy, sell, or hold. This compromises trust in their
predictions, especially in highly volatile markets. In this
paper, we try to solve those problems by proposing a new stock forecasting and
recommendation system based on the Temporal Fusion Transformer (TFT), combining
historical stock prices, technical indicators and fundamental financial data in
one framework. It can learn both short-term and long-term trends and show which
factor is most important for the prediction. After predicting the price and
other financial indicators, our model extends TFT into a decision-support system
by embedding financial ratio analysis and recommendation logic within the
forecasting pipeline. |
|
Keywords: |
Deep Learning, Investment Recommendation, Stock Forecasting, Temporal Fusion
Transformer, Technical Indicators, Fundamental Analysis. |
|
DOI: |
https://doi.org/10.5281/zenodo.18824437 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4 -- 2026 |
|
Full
Text |
|
|
Title: |
USE OF ARTIFICIAL INTELLIGENCE MODELS IN EDUCATIONAL DATA MANAGEMENT TO SUPPORT
PEDAGOGICAL DECISIONS |
|
Author: |
OKSANA KARABIN, NATALIYA LUPAK, VUGAR IBRAGIM OGLY SALMANOV, SERHII
KRYVOSHLYKOV, ZINOVII ONYSHKIV |
|
Abstract: |
The relevance of the topic is determined by the growing role of analytical
systems in modern education and the need to combine forecasting accuracy with
the efficiency and stability of algorithms. The paper compares three
configurations – ML-Base, DL-Advanced and Hybrid-Assist – using the integral
educational analytics effectiveness index (IEAEI), which combines indicators of
accuracy (Acc), stability (Stab), processing time (Time) and interpretability
(Interpret). The methodology included normalisation of results, non-parametric
hypothesis testing (Kruskal–Wallis and Mann–Whitney criteria), analysis of
changes in weight coefficients (ΔW-index) and assessment of correlations between
platform dynamics and forecast stability (Spearman's coefficient). To test the
transferability of the findings, a design with three contexts – Ukraine,
Azerbaijan and Poland – was used, which made it possible to compare the
behaviour of models in countries with transition economies and in European
educational environments. The results showed that the hybrid configuration
provides the best balance between stability and accuracy (IEAEI > 0.84;
cross-country: Ukraine – 0.821; Azerbaijan – 0.818; Poland – 0.824) with a
relatively fast adaptation time; DL-Advanced achieves higher maximum accuracy,
especially under conditions of more complete and consistent data (approaching
Hybrid-Assist in the Polish subsample), but requires more time for convergence;
ML-Base has the shortest response time in all three contexts, but is inferior in
terms of predictive quality. The scientific novelty lies in the generalisation
of the patterns of interaction between the algorithm type, platform
characteristics and pedagogical practice requirements in an international
context, which allows for the justified integration of intelligent models into
educational analytics. Prospects for further research include expanding the set
of evaluation metrics with indicators of cognitive convenience for users and
adapting the methodology to cloud and mobile educational solutions, taking into
account cross-country data variability. |
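A composite index like the IEAEI described above is a weighted combination of
normalized sub-scores. The sketch below uses invented weights and illustrative
configuration values (not the study's measurements) just to show the mechanics
of combining accuracy, stability, processing time, and interpretability:

```python
# Hypothetical composite index: weighted sum of normalized accuracy,
# stability, (inverted) processing time, and interpretability.
def composite_index(acc, stab, time_s, interp,
                    w=(0.35, 0.25, 0.2, 0.2), t_max=10.0):
    time_score = 1.0 - min(time_s, t_max) / t_max   # faster -> higher score
    parts = (acc, stab, time_score, interp)
    return sum(wi * pi for wi, pi in zip(w, parts))

# Illustrative values only: DL-Advanced gets the highest raw accuracy,
# ML-Base the shortest time, Hybrid-Assist the best overall balance.
configs = {
    "ML-Base":       composite_index(0.70, 0.80, 1.5, 0.85),
    "DL-Advanced":   composite_index(0.88, 0.76, 6.0, 0.55),
    "Hybrid-Assist": composite_index(0.85, 0.84, 3.0, 0.75),
}
best = max(configs, key=configs.get)
print(best, round(configs[best], 3))
```

With these invented inputs the balanced configuration scores highest even
though it leads on neither raw accuracy nor speed, mirroring the qualitative
pattern the abstract reports.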
|
Keywords: |
Pedagogy, Knowledge, Learning, Educational Analytics, ML-Base, DL-Advanced,
Hybrid-Assist, IEAEI, ΔW-Index, Forecast Stability, Pedagogical Solutions,
Non-Parametric Tests |
|
DOI: |
https://doi.org/10.5281/zenodo.18824453 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4 -- 2026 |
|
Full
Text |
|
|
Title: |
A COMPARATIVE STUDY OF MACHINE LEARNING FOR AGE GROUP IMPUTATION IN LARGE-SCALE
E-COMMERCE DATA |
|
Author: |
JURON PAIK |
|
Abstract: |
Current privacy regulations and the prevalence of voluntary non-disclosure have
led to significant gaps in demographic data within e-commerce platforms,
severely hindering personalized marketing efforts. This study proposes a machine
learning–based approach to address the problem of missing age group information
in large-scale e-commerce platforms. Resolving this issue can enhance the
personalization potential while respecting user privacy constraints. The dataset
used in this research comprised actual e-commerce platform data, including
customer behavior logs, product attributes, and temporal and regional variables.
Initially, five classification models—logistic regression, decision tree, random
forest, k-nearest neighbors (kNN), and XGBoost—were compared. Preliminary
experiments revealed that logistic regression performed relatively poorly;
therefore, it was excluded from the final comparison, and hyperparameter
optimization was performed on the remaining four models. Model performance was
evaluated on the validation set using accuracy and F1-score as the primary
metrics. Experimental results showed that the random forest (default
configuration) achieved the highest performance with approximately 78%
accuracy, while XGBoost, although underperforming in the default setting,
improved to a comparable level after optimization. In contrast, decision tree
and kNN showed limited improvement from optimization, with performance in some
cases declining compared to the default setting. Feature importance analysis
identified behavioral frequency, a specific event type, gender, and product
attributes as key factors. This research contributes by empirically
demonstrating the feasibility of constructing age group prediction models using
large-scale e-commerce data, thereby offering a practical strategy for
addressing missing demographic information under privacy constraints.
Furthermore, the feature importance analysis provides actionable insights for
target marketing and personalized recommendation system design. Conclusively,
this study empirically demonstrates that behavioral logs alone are sufficient to
predict demographic attributes with high accuracy. The proposed Random
Forest-based framework offers a cost-effective and privacy-preserving
alternative to complex deep learning models for practical deployment in
real-world e-commerce systems. |
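The modeling setup the abstract describes, a random forest predicting an age
band from behavioral features and reporting accuracy, macro F1, and feature
importances, can be sketched with scikit-learn on synthetic data. The features
and their relationship to age here are invented stand-ins for the paper's
proprietary e-commerce logs:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for behavioral-log features: a frequency signal tied to
# the hidden age band, a gender flag, and a noisier event-type signal.
rng = np.random.default_rng(42)
n = 2000
age_group = rng.integers(0, 4, size=n)               # 4 hidden age bands
X = np.column_stack([
    age_group * 3 + rng.normal(0, 1.5, n),           # behavioral frequency
    rng.integers(0, 2, n),                           # gender
    age_group + rng.normal(0, 2.0, n),               # event-type signal
])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, age_group, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
acc = accuracy_score(y_te, pred)
f1 = f1_score(y_te, pred, average="macro")
print("accuracy:", round(acc, 3))
print("macro F1:", round(f1, 3))
print("feature importances:", clf.feature_importances_.round(3))
```

The feature-importance vector is what supports the kind of "key factor"
analysis the paper uses for targeting insights.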
|
Keywords: |
Age Group Prediction, Machine Learning, Privacy-Preserving Data Analysis,
Personalized Recommendation, E-commerce Data |
|
DOI: |
https://doi.org/10.5281/zenodo.18824465 |
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2026 -- Vol. 104. No. 4 -- 2026 |
|
Full
Text |
|
|
|