Submit Paper / Call for Papers
The journal receives papers in a continuous flow, and we consider articles
from a wide range of Information Technology disciplines, from basic research
to the most innovative technologies. Please submit your papers electronically
to our submission system at http://jatit.org/submit_paper.php in MS Word, PDF,
or a compatible format so that they may be evaluated for publication in the
upcoming issue. This journal uses a blinded review process; please include all
personally identifiable information in the manuscript before submitting it for
review, and we will remove the necessary information on our side. Submissions
to JATIT should be full research / review papers (properly indicated below the
main title).
Journal of
Theoretical and Applied Information Technology
February 2026 | Vol. 104 No. 3 |
|
Title: |
SATELLITE BASED ASSESSMENT OF SOIL HEAVY METAL CONTAMINATION USING DEEP LEARNING
AND SWARM INTELLIGENCE |
|
Author: |
NAGA SANTHA KUMARI CHEETI, GOVADA ANURADHA
|
Abstract: |
Coastal–deltaic ecosystems are sensitive interfaces where riverine and marine
processes interact, resulting in long-term persistence of heavy metal
contamination. Heavy metals such as Cd, Pb, Hg, As, and Cr tend to accumulate
in soils and sediments, where they may affect ecosystems and human health.
Traditional point-based monitoring techniques have been widely applied for
assessing the spatial and temporal contamination burden at focused sites, but
they are limited in scale and resolution. Despite extensive studies on heavy
metal contamination using either geostatistical methods or standalone machine
and deep learning models, the existing literature lacks an integrated
analytical synthesis that systematically evaluates optimization-driven ML–DL
frameworks tailored to the non-linear, heterogeneous, and hydrodynamically
active nature of coastal–deltaic ecosystems. This review addresses this gap by
critically analyzing how swarm-intelligence-optimized learning architectures
improve predictive reliability, interpretability, and scalability when fusing
satellite, soil, and hydrological data. The study contributes new knowledge by
establishing a unified conceptual framework that links algorithmic
optimization behavior with delta-specific environmental processes, thereby
advancing early warning and decision support. Advances in remote sensing,
Machine Learning (ML), Deep Learning (DL), and Swarm Intelligence (SI) in
recent years have synergistically contributed to the development of
intelligent, data-driven techniques for predictive environmental modeling.
Adding optimization techniques such as PSO, GWO, and ACO to the framework
improves model calibration, predictive accuracy, and the interpretation of
underlying phenomena. This survey focuses on ML–DL–SI optimization-driven
frameworks that integrate satellite imagery for heavy metal prediction. These
unified, explainable systems promise improved early detection and sustainable
management of contamination in fragile coastal–deltaic environments. |
|
Keywords: |
Coastal–Deltaic Ecosystems; Heavy Metal Contamination; Machine Learning (ML);
Deep Learning (DL); Swarm Intelligence (SI); Optimization Algorithms. |
|
DOI: |
https://doi.org/10.5281/zenodo.18666451 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
QUANTIFYING THE UNCERTAINTY OF FINANCIAL DISTRESS SEARCH BEHAVIOR: A HYBRID LSTM
AND MONTE CARLO DROPOUT APPROACH FOR THE PAWNING INDUSTRY |
|
Author: |
WANDA WANDOKO, ROBERTUS NUGROHO PERWIRO ATMOJO, IGNATIUS ENDA PANGGATI |
|
Abstract: |
This study aims to develop a decision support model to predict public interest
trends in Indonesian pawning services while simultaneously quantifying
uncertainty risks through a hybrid framework. Utilizing daily Google Trends
time-series data as a real-time proxy for pawning demand over a five-year period
(November 2020–October 2025), this research develops a hybrid model. It
integrates Long Short-Term Memory (LSTM) to map complex non-linear patterns and
long-term dependencies, with Monte Carlo simulation (via the MC Dropout
technique) to generate probabilistic forecasts. Empirical results demonstrate
that the deterministic LSTM component achieved "Highly Accurate" performance,
measured by a Mean Absolute Percentage Error (MAPE) of 7.90% on the test set.
The MAPE results demonstrate the model's ability to separate fundamental trend
signals from daily noise. Furthermore, the probabilistic Monte Carlo component
successfully transformed single-point forecasts into measurable risk
distributions. The model proved effective as an anomaly detector, identifying
market surprises when actual values fell outside the 95% Confidence Interval.
The novelty of this research lies in the hybrid integration of deep learning and
stochastic Monte Carlo simulation applied to the underexplored pawning financial
sector. This study contributes theoretically to time-series forecasting and
pawning literature. Additionally, it offers managerial contributions,
particularly for stakeholders in the pawning industry, by providing a tool for
risk-based decision-making. |
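The MC Dropout idea the abstract describes can be sketched in a few lines: keep dropout active at inference, run many stochastic forward passes, and read the spread of the outputs as predictive uncertainty. The toy numpy model below (random weights, network shape, dropout rate, and the 95% interval rule are all illustrative assumptions, not the authors' LSTM) shows the mechanism, including the "market surprise" flag for observations outside the interval.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" single-hidden-layer regressor standing in for the LSTM.
W1 = rng.normal(size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1)); b2 = np.zeros(1)

def mc_dropout_predict(x, T=200, p_drop=0.2):
    """Run T stochastic forward passes with dropout LEFT ON at inference
    (the MC Dropout trick) and return the predictive mean and std."""
    outs = []
    for _ in range(T):
        h = np.tanh(x @ W1 + b1)
        mask = rng.random(h.shape) > p_drop       # fresh Bernoulli mask per pass
        h = h * mask / (1.0 - p_drop)             # inverted-dropout scaling
        outs.append((h @ W2 + b2).item())
    outs = np.array(outs)
    return outs.mean(), outs.std()

x = np.array([[0.5]])
mean, std = mc_dropout_predict(x)
lo, hi = mean - 1.96 * std, mean + 1.96 * std     # approximate 95% interval

def is_surprise(actual):
    """Anomaly flag: an actual value outside the interval is a 'market surprise'."""
    return not (lo <= actual <= hi)
```

In the paper's setting the single-point forecast (`mean`) would be accompanied by the interval, so decision-makers see a risk distribution rather than one number.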
|
Keywords: |
Time-Series Forecasting, Long Short-Term Memory (LSTM), Monte Carlo Simulation,
Google Trends, Pawning Industry, Risk Management. |
|
DOI: |
https://doi.org/10.5281/zenodo.18666479 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
PHISHING DETECTION WITH SELF-ATTENTION AND NEURAL ODE LAYERS USING A
TENSORFLOW-BASED DEEP LEARNING APPROACH |
|
Author: |
R ADINARAYANA, G VAMSI KRISHNA
|
Abstract: |
Phishing remains one of the largest cybersecurity problems we face today and is
one of the most common means by which users are deceived into providing their
personal information (e.g., usernames, passwords, and credit card details),
because such attacks can be cheaply mass-produced. The paper proposes an
effective deep learning-enabled phishing detection model, which jointly applies
self-attention modules and neural ordinary differential equation (ODE) blocks
to improve on traditional phishing detection systems. A multi-head
self-attention mechanism allows the model to attend to the most relevant
features within complex datasets and capture elaborate relationships between
categorical and numerical features. The use of Neural ODEs adds continuous-time
modelling capability, allowing the system to capture the dynamic, evolving
patterns of phishing attempts. The architecture also uses elaborate
preprocessing layers, including normalization of numeric features and one-hot
encoding of categorical features, providing a powerful means to represent
different sorts of data. Regularization techniques such as scaled exponential
linear unit (SELU) activations and Alpha Dropout add to the stability of the
learning process by preventing overfitting. The model, which has been built in
TensorFlow, outperforms traditional models (e.g., KNN, logistic regression, and
SVM) with an accuracy of 96.1% and an AUC of 0.99. These results demonstrate
that the model generalizes well to unseen samples, making it a scalable, robust
solution for phishing detection. This work contributes to advancements in
cybersecurity by showing that self-attention and Neural ODEs can be usefully
combined to meet the demands of an ever-evolving threat. |
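The SELU activation mentioned in the abstract scales outputs so that activations tend to self-normalize (zero mean, unit variance) across layers, which is what makes it pair naturally with Alpha Dropout. A minimal numpy sketch of the formula, using the standard self-normalizing-network constants (TensorFlow's `tf.nn.selu` implements the same function; this is only to show the math):

```python
import numpy as np

# Standard SELU constants from the self-normalizing networks formulation.
LAMBDA, ALPHA = 1.0507009873554805, 1.6732632423543772

def selu(x):
    """Scaled exponential linear unit: lambda * x for x > 0,
    lambda * alpha * (exp(x) - 1) otherwise."""
    x = np.asarray(x, dtype=float)
    return LAMBDA * np.where(x > 0, x, ALPHA * np.expm1(x))
```

Negative inputs saturate toward `-LAMBDA * ALPHA` (about -1.76), which bounds the activations and keeps training stable.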
|
Keywords: |
Phishing Detection, Self-Attention, Neural ODE, Deep Learning, TensorFlow,
Cyber Security |
|
DOI: |
https://doi.org/10.5281/zenodo.18666499 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
PPBEFL: PRIVACY-PRESERVING BLOCKCHAIN ENABLED FEDERATED LEARNING FOR HEALTHCARE
DATA SECURITY |
|
Author: |
TANISHA BHARDWAJ , SUMANGALI K |
|
Abstract: |
Blockchain (BC)-based platforms have emerged as a result of the technology's
explosive expansion, each with distinct structures and consensus mechanisms.
This has heightened the focus on blockchain interoperability, facilitating
interactions between different platforms. In decentralized learning
environments, maintaining security, transparency, and trust is particularly
challenging, especially in sensitive sectors like financial services and
healthcare, where data centralization is not a viable option. In this work, the
Blockchain-Enabled Federated Learning (BEFL) approach, which combines
blockchain technology with federated learning (FL), is provided as a dependable
and effective alternative. By using an aggregation technique to eliminate
anomalous model parameters and integrating blockchain procedures and privacy
strategies to synchronize client privacy protection, PPBEFL protects against
poisoning attempts. The PPBEFL model trains the clients' datasets using local
models: the Visual Geometry Group 19 (VGG19) network for images and a
Convolutional Neural Network (CNN) for the remaining data. The updates are
serialized to avoid concurrency issues and recorded immutably on the
blockchain. Aggregated updates refine a global model iteratively in a
transparent and verifiable way. Experiments use the Heart Disease (HD) dataset,
the Breast Cancer (BC) image dataset (Curated Breast Imaging Subset of the
Digital Database for Screening Mammography, CBIS-DDSM), the Brain Tumor
Magnetic Resonance Imaging (MRI) dataset, and the Breast Cancer Wisconsin
(Diagnostic) dataset. Comparisons with typical FL schemes such as FedAvg,
Federated Learning with Multi-Party Computation (FL-MPC), Federated Learning
with Robust Aggregation in Edge Computing (FL-RAEC), and privacy-preserving and
efficient FL (PEFL) all show improved defense against different types of
attacks, based on metrics such as throughput, precision, loss, and delay. |
|
Keywords: |
Blockchain, Consensus Mechanism, Differential Privacy (DP), Federated Learning
(FL), privacy security, Neural Network, Data protection, Decentralized learning,
Privacy-Preserving Blockchain Enabled Federated Learning (PPBEFL). |
|
DOI: |
https://doi.org/10.5281/zenodo.18666506 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
STRUCTURAL CAUSES OF THE DIGITAL DIVIDE AND ITS LONG-TERM CONSEQUENCES FOR
ACADEMIC ACHIEVEMENT |
|
Author: |
MOHAMMAD BANI YOUNES, ISSA ALSMADI, OMER ABUSHQEER, NJOOD ALJARRAH, RAZAN
OBEIDAT |
|
Abstract: |
The rapid digitalization of education has reshaped learning environments but
also exacerbated longstanding social inequities, creating a multidimensional
digital divide. This study examines the structural determinants of digital
exclusion, differences in access, skills, and usage, and evaluates their
influence on students’ academic performance. Using a mixed-methods design, we
analyze micro-level data from the 2018 Program for International Student
Assessment (PISA) alongside a thematic synthesis of recent case studies.
Quantitative results show a strong positive association (p < 0.01) between a
composite Digital Access Index and achievement in mathematics, reading, and
science, independent of socioeconomic status. The qualitative findings reveal
three recurring determinants: socioeconomic and infrastructural constraints,
disparities in teacher digital preparedness, and variations in sociocultural and
digital capital. Integrating both strands of evidence, the study demonstrates
how digital exclusion contributes to persistent educational disadvantages and
constrains long-term opportunities in higher education and the labor market.
Our study shows that addressing the digital divide requires coordinated reforms
targeting infrastructure, digital competency development, and equitable
pedagogical design to prevent the further entrenchment of digital
stratification. |
|
Keywords: |
C++, Code Reusability, Template Programming, Generic Programming, Function
Templates, Class Templates, Standard Template Library (STL), Compile-Time
Polymorphism, Template Metaprogramming. |
|
DOI: |
https://doi.org/10.5281/zenodo.18666522 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
UTILIZING GPT 3.5 FOR ARABIC INTENT CLASSIFICATION WITH PROMPTING |
|
Author: |
FATMA JAMAL HABIB HABIB, SALLY S. ISMAIL, ABEER M. MAHMOUD
|
Abstract: |
Intent classification in natural language processing (NLP) is crucial for
identifying user intents from their utterances. This task is particularly
challenging for Arabic due to its complex morphology and high ambiguity. Recent
advancements in NLP, including deep learning and transfer learning, have
improved Arabic intent classification. However, a significant gap still exists
compared to major languages like English. This paper proposes a new Arabic
intent classification approach based on 1-shot and 3-shot techniques,
investigating prompting strategies specifically for Arabic. The 'Temperature'
parameter is employed to modulate probability distributions and guide the model
in classifying Arabic sentences into predefined categories. Results show that
prompting, especially contrastive prompting with dynamic reasoning, outperforms
fine-tuning in both accuracy and resource efficiency, highlighting the
effectiveness of prompting techniques in enhancing Arabic NLP applications.
Contrastive prompting with dynamic reasoning reported an accuracy of 98% for
1-shot and 100% for 3-shot, while fine-tuning recorded 87% on the Arabic intent
dataset used. |
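The 'Temperature' parameter the abstract mentions controls how sharply the model's scores are converted into a probability distribution over intent classes. A numpy sketch of temperature-scaled softmax (the logit values and class count are illustrative assumptions, not GPT-3.5 internals):

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    """Divide logits by the temperature before softmax: T < 1 sharpens the
    distribution (more decisive intent picks), T > 1 flattens it."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = [2.0, 1.0, 0.1]          # toy scores for three intent classes
sharp = softmax_with_temperature(logits, temperature=0.5)
flat = softmax_with_temperature(logits, temperature=2.0)
```

A low temperature makes the top intent dominate the distribution, which is why the parameter is useful when forcing the model to commit to one predefined category.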
|
Keywords: |
Natural Language, Prompting, Intent classification, Few shots, Deep learning,
Transfer learning, Softmax, Temperature Scaling, Morphology, Ambiguity |
|
DOI: |
https://doi.org/10.5281/zenodo.18666533 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
MODELING THE EFFECTS OF CHATBOT RESPONSIVENESS, USEFULNESS, AND PERCEIVED RISK
ON E-COMMERCE USER SATISFACTION AND BEHAVIORAL INTENTION |
|
Author: |
AUDREY ESTHER LITA , VIANY UTAMI TJHIN |
|
Abstract: |
As e-commerce grows quickly in Indonesia, more businesses are using AI chatbots
to make their services more efficient and available. These chatbots are meant to
provide quick, useful answers that can improve the customer experience and
encourage people to keep using the service. Still, many users worry about how
reliable chatbots are, whether their data is safe, and if the responses are
accurate. These concerns can make people feel there is a high risk, which may
lower their satisfaction and make them less likely to keep using the service.
Although past studies have looked at these issues one at a time, there is
little research that examines them together. This study aims to analyze the
influence
of chatbot responsiveness, usability, and risk perception on user satisfaction
and behavioral intentions, and to examine the mediating role of satisfaction in
these relationships. This study used a quantitative approach with 403 active
e-commerce users from Shopee, Tokopedia, Lazada, and Blibli. Data analysis was
carried out using the Partial Least Squares Structural Equation Modeling
(PLS-SEM) method with SmartPLS 4. To add depth to the findings, qualitative
guerrilla interviews were also conducted to capture real user experiences. The
results show that when chatbots are responsive and easy to use, customers are
more satisfied and more likely to act positively, whereas perceived risk lowers
their satisfaction. Customer satisfaction partly explains how these factors
affect behavior, but it does not account for everything. These results help
explain how people in emerging markets adopt AI chatbots and offer useful
guidance for e-commerce companies looking to design more trustworthy, reliable
chatbots. |
|
Keywords: |
E-Commerce, Chatbot, Responsiveness, Usefulness, Perceived Risk, User
Satisfaction |
|
DOI: |
https://doi.org/10.5281/zenodo.18666543 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
THE BEHAVIOUR OF THE TWO LANES OF A QUASI-ONE-DIMENSIONAL SYSTEM CONNECTED BY AN
ANISOTROPIC NODE: STUDY OF A MERGING NODE |
|
Author: |
AHLAM EL ATTARI, KAMAL JETTO, ZINEB TAHIRI, ANASTASIA GENNAD’EVNA SHEVTSOVA,
ABDELILAH BENYOUSSEF, ABDALLAH EL KENZ
|
Abstract: |
In light of the growing popularity of so-called hybrid or multimethod
modelling approaches, we attempted to solve a selected problem in the field of
modelling transport processes using the AnyLogic general-purpose modelling
tool. Using the example of a selected controlled intersection, the procedure of
creating a model with a more detailed analysis of its selected parts was
discussed. Later on, the same model is investigated in the light of statistical
physics. Using a cellular automaton model, we studied the response of a
quasi-one-dimensional system to an anisotropic merging node. The comparison of
the two simulations revealed unusual behaviours in the latter. Accordingly,
following the vehicle-extraction patterns, we found that the exits of the two
lanes are correlated and that a self-organization process governs the node. A
cross-correlation test proved that the flows of both lanes are interdependent.
Finally, we demonstrated the importance of cellular automata models in
understanding the process of traffic breakdowns in comparison to hybrid
modelling programs. |
|
Keywords: |
Traffic Flow, Anisotropy, AnyLogic, Merge, Correlation, Self-Organization. |
|
DOI: |
https://doi.org/10.5281/zenodo.18666550 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
ENERGY-EFFICIENT AND QOS-OPTIMIZED IOT ROUTING USING PROPOSED QEER-MFO HYBRID
ALGORITHM |
|
Author: |
S. BHARATHI, DR. D. MARUTHANAYAGAM |
|
Abstract: |
The rapid expansion of the Internet of Things (IoT) has transformed
connectivity in a number of industries, including smart cities, healthcare,
transportation, and agriculture. However, because of their dynamic topology,
device heterogeneity, and limited power resources, IoT networks continue to
confront difficulties with energy management, routing efficiency, and Quality
of Service (QoS). Conventional routing protocols often fail to adapt to the
changing needs of IoT environments because they were mostly created for static
or homogeneous networks. To overcome these constraints, this study proposes a
new hybrid routing architecture called QoS and Energy-Efficient Routing
utilizing Moth-LSTM Optimization (QEER-MFO), which combines Long Short-Term
Memory (LSTM) networks with the Moth-Flame Optimization (MFO) method. While the
LSTM model forecasts network characteristics like traffic congestion and node
energy depletion to enable proactive routing decisions, the MFO component makes
sure that the best path is chosen through adaptive exploration and exploitation,
reducing energy consumption and transmission latency. The Trust-Aware IIoT
Routing Dataset is used to assess the suggested method, and the ns3-gym
interface is used to simulate it in NS-3 combined with Python. According to
experimental data, QEER-MFO outperforms current bio-inspired and conventional
routing protocols, resulting in increased packet delivery ratios, decreased
latency, and extended network lifetime. For scalable and intelligent IoT
ecosystems, this hybrid architecture provides a self-learning, predictive, and
energy-efficient routing approach. |
|
Keywords: |
Internet of Things (IoT), Quality of Service (QoS), Energy Efficiency,
Bio-Inspired Optimization, Moth-Flame Optimization (MFO), Long Short-Term Memory
(LSTM), Hybrid Routing, Machine Learning, NS-3 Simulation and Predictive Network
Modeling. |
|
DOI: |
https://doi.org/10.5281/zenodo.18666562 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
RHYTHM-GUIDED CONVOLUTIONAL NEURAL NETWORK SEGMENTAL TRANSITION
ENTROPY-MODULATED LESION DISCRIMINATION COTTON LEAF DISEASE IDENTIFICATION |
|
Author: |
K. KARTHIGA , Dr. B. RAJDEEPA |
|
Abstract: |
Cotton leaf disease presents significant diagnostic challenges due to
inconsistent symptom expression, irregular lesion geometry, and background
interference in open-field imagery. These issues reduce the reliability of
conventional detection systems, which often struggle with symptom fragmentation,
visual overlap, and class imbalance. To address these limitations, this research
proposes a biologically inspired architecture titled CO-CNN (Caterpillar
Optimization-Based Convolutional Neural Network). The model incorporates
rhythm-guided adaptation, segmental coordination, and entropy-modulated
parameter transitions to regulate convolutional behavior based on lesion-driven
visual entropy. Unlike fixed-structure models, CO-CNN refines kernel scaling,
dropout positioning, and gradient modulation in response to region-specific
complexity, ensuring stability across asymmetric lesion patterns. The network
performs segment-wise adjustment to retain lesion detail while suppressing
irrelevant activations arising from field-induced noise. In evaluation, CO-CNN
achieves 89.828% Overall Detection Efficiency and reduces Total Prediction
Deviation to 10.172%, outperforming state-of-the-art models under identical test
conditions. These results validate its ability to classify cotton leaf diseases
with high sensitivity and precision, especially in real-world datasets where
symptom boundaries are diffuse, visual distortions are common, and class
distributions remain uneven. |
|
Keywords: |
CO-CNN, Cotton Leaf Disease, Entropy-Modulated Learning, Segmental Adaptation,
Rhythm-Guided CNN, Lesion Discrimination |
|
DOI: |
https://doi.org/10.5281/zenodo.18666572 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
FEDERATED LEARNING WITH QUANTUM-INSPIRED BOLTZMANN WEIGHTING: ENABLING SECURE
AND ROBUST HEART DISEASE PREDICTION |
|
Author: |
MANJIT SINGH, MONG-FONG HORNG, CHUN-CHIH LO, D.VETRITHANGAM, SIVA SHANKAR |
|
Abstract: |
Cardiovascular disease prediction using machine learning faces critical
methodological challenges, including data leakage from improper preprocessing
sequences, arbitrary feature subset selection without empirical validation, and
privacy vulnerabilities in centralized model aggregation. This research proposes
a federated quantum-enhanced learning framework that addresses these gaps
through: (1) site-specific local preprocessing with global stratified train/test
splitting, training-only scaling, and training-only SMOTE application to
eliminate data leakage; (2) federated feature selection combining per-site
RandomForest importance computation, server-side global aggregation, and multi-k
evaluation (k ∈ {5, 7, 9, 11, 13}) with regularized optimization to determine
empirically-validated optimal feature subsets (k* = 11); (3) quantum-inspired
Boltzmann-weighted secure aggregation (weights ∝ exp(−β·Loss_i)) with
convergence monitoring and CTGAN-based generative augmentation for robustness to
heterogeneity; and (4) convergence speed metrics tracking accuracy history to
identify earliest rounds achieving near-optimal performance, enabling
computational efficiency gains. The proposed architecture employs a federated
Random Forest classifier (100 trees) trained at each site on CTGAN-augmented
data using an empirically optimized 11-feature subset, while maintaining patient
data locally at each institution and enabling collaborative model training
through encrypted parameter exchange using quantum-safe cryptography.
Experimental evaluation on the UCI/Kaggle heart disease dataset demonstrates
superior performance compared to a centralized Logistic Regression baseline
(99.3% accuracy with AUC 0.9967 for the federated approach vs. 91.8% accuracy
for centralized Logistic Regression training), enhanced privacy
guarantees through lattice-based LWE-256 quantum-resistant encryption, improved
robustness across heterogeneous sites (cross-site performance variance <0.3%),
and 28% faster convergence (48 rounds vs. 67 rounds for standard FedAvg). This
work advances the state-of-the-art in privacy-aware, distributed cardiovascular
disease prediction and is suitable for real-world multi-institutional clinical
deployment. |
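The Boltzmann-weighted secure aggregation the abstract specifies (weights ∝ exp(−β·Loss_i)) can be sketched directly: sites with lower validation loss receive exponentially larger weight in the global average, which is what suppresses anomalous or poisoned updates. A numpy sketch, with β, the toy parameter vectors, and the loss values all chosen only for illustration:

```python
import numpy as np

def boltzmann_aggregate(params, losses, beta=5.0):
    """Weight each site's parameters by exp(-beta * loss_i), normalized to
    sum to 1, then form the weighted average as the global model."""
    losses = np.asarray(losses, dtype=float)
    w = np.exp(-beta * losses)            # Boltzmann weights, unnormalized
    w /= w.sum()
    return w, np.average(np.asarray(params, dtype=float), axis=0, weights=w)

# Three sites' toy 2-dimensional parameter vectors and validation losses;
# the third site is an outlier (e.g., a poisoned update).
site_params = [[1.0, 1.0], [2.0, 2.0], [10.0, 10.0]]
site_losses = [0.10, 0.12, 0.90]
weights, global_params = boltzmann_aggregate(site_params, site_losses)
```

Larger β makes the aggregation more selective (approaching "take only the best site"), while β → 0 recovers a plain FedAvg-style uniform average.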
|
Keywords: |
Federated Learning, Quantum Computing, Heart Disease Prediction,
Privacy-Preserving Machine Learning, Feature Selection, Secure Aggregation,
Distributed Healthcare |
|
DOI: |
https://doi.org/10.5281/zenodo.18666578 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
A COMPREHENSIVE ANALYSIS ON DEEP LEARNING TECHNIQUES FOR ELECTRO ENCEPHALOGRAM
SIGNAL ANALYSIS AND APPLICATIONS |
|
Author: |
SYDA NAHIDA , SHAIK SAGAR IMAMBI |
|
Abstract: |
Electroencephalogram (EEG) analysis has emerged as a cornerstone for
understanding brain dynamics and diagnosing neurological and cognitive
disorders. While recent years have witnessed a surge in deep learning–based EEG
studies reporting high classification accuracy, existing literature largely
remains descriptive and model-centric, offering limited critical insight into
why certain approaches succeed or fail under real-world EEG constraints such as
non-stationarity, inter-subject variability, and low signal-to-noise ratios.
This gap limits both scientific understanding and clinical translation. This
survey addresses this limitation by presenting a critical and integrative
analysis of deep learning techniques for EEG signal processing, moving beyond
architectural taxonomies to examine their methodological assumptions,
robustness, interpretability, and deployment feasibility. Unlike prior surveys,
this work systematically evaluates how preprocessing strategies, feature
learning paradigms, and emerging architectures—such as attention mechanisms,
transformers, and graph neural networks—interact with the intrinsic properties
of EEG signals. The study synthesizes recent advances to identify unresolved
challenges in generalization, explainability, and data scarcity, while
highlighting emerging trends such as self-supervised learning, multimodal
fusion, and privacy-preserving federated frameworks. By framing deep
learning–based EEG analysis through a critical lens, this survey provides
actionable insights for researchers and clinicians, and contributes new
conceptual understanding required for the development of reliable,
interpretable, and clinically deployable EEG-based AI systems. |
|
Keywords: |
EEG Signals, Deep Learning, Brain Wave Analysis, Neural Activity Classification,
Machine Learning Algorithms, Emotion Recognition, Neurological Disorder
Detection |
|
DOI: |
https://doi.org/10.5281/zenodo.18666600 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
REAL-TIME ARRHYTHMIA DETECTION FROM WEARABLE ECG DEVICES USING LIGHTWEIGHT 1D
CNN AND EDGE AI DEPLOYMENT |
|
Author: |
SHANMUGA SUNDARI M , T. LAKSHMI PRAVEENA , PUSHPA BHUPATIRAJU , ASHA VUYYURU ,
BUKKACHARLA KISHORE KUMAR , B. GOVARDHAN SIVA SAI |
|
Abstract: |
Cardiovascular diseases (CVDs) are still among the most prevalent causes of
death globally, making it important to have quick, constant, and non-invasive
cardiac monitoring. In this paper, a real-time system for arrhythmia detection
using wearable ECG sensors combined with a low-power one-dimensional
Convolutional Neural Network (1D-CNN) optimized for Edge AI deployment is
introduced. The model is tailored to run efficiently on low-power wearables like
smartwatches and ECG patches. A publicly available dataset, the MIT-BIH
Arrhythmia Database, was used for training and validation. Preprocessing
involved band-pass filtering (0.5–40 Hz), z-score normalization, and
segmentation in 2-second windows. The 1D-CNN architecture with four
convolutional blocks and a softmax classifier had an accuracy of 98.72 %,
precision of 97.85 %, recall of 98.10 %, and F1-score of 0.981 on the test set.
Model pruning and quantization-based edge optimization decreased memory
consumption by 42 % and inference latency by 35 %, making it possible for
real-time processing at 62 frames per second on the Raspberry Pi 4 platform.
The system reliably distinguished different arrhythmias, including left bundle
branch block (LBBB), atrial fibrillation (AF), and premature ventricular
contraction (PVC). As a result, this method offers a low-power, readily
scalable way to continuously monitor cardiac health that can be used in both
hospitals and patients' homes. |
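The preprocessing steps the abstract lists (band-pass filtering, z-score normalization, segmentation into 2-second windows) can be sketched with numpy. The 360 Hz rate is the MIT-BIH sampling rate; the synthetic signal, the non-overlapping windowing, and normalizing over the whole record are illustrative assumptions, and the 0.5–40 Hz band-pass step (typically done with e.g. `scipy.signal.butter`/`filtfilt`) is left out to keep the sketch dependency-free:

```python
import numpy as np

FS = 360                 # MIT-BIH Arrhythmia Database sampling rate (Hz)
WIN = 2 * FS             # 2-second windows, as in the paper

def zscore(sig):
    """Z-score normalization: zero mean, unit variance."""
    return (sig - sig.mean()) / sig.std()

def segment(sig, win=WIN):
    """Normalize, then split a 1-D ECG record into non-overlapping
    2-second windows (shape: n_windows x win samples)."""
    n = len(sig) // win
    return zscore(sig[: n * win]).reshape(n, win)

# 10 seconds of synthetic "ECG": a noisy 1.2 Hz sinusoid stands in for a record.
t = np.arange(10 * FS) / FS
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
windows = segment(ecg)   # each row would be one input to the 1D-CNN
```

Each row of `windows` is one fixed-length input segment for a 1D-CNN of the kind described.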
|
Keywords: |
Arrhythmia Detection, Wearable ECG, 1D Convolutional Neural Network, Edge AI,
Real-Time Inference, MIT-BIH Dataset, Embedded Health Monitoring, Quantization,
TinyML |
|
DOI: |
https://doi.org/10.5281/zenodo.18666631 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
THE ROLE OF INFORMATION TECHNOLOGY IN THE DEVELOPMENT OF CORPORATE
COMMUNICATIONS AND PR |
|
Author: |
HASSAN ALI AL- ABABNEH, ZEYAD M. JAMHAWI, AHMED ALI OTOOM, AL REFAI MOHAMMED N,
JAMEEL AHMAD KHADER, GANNA MYROSHNYCHENKO |
|
Abstract: |
The paper investigates the role of information technology in the development of
corporate communications and public relations during the period of digital
transformation. The importance of the study is determined by the need to
develop systemic solutions that enable organizations to organize transparent,
sustainable interaction and build stakeholder trust. The aim of the paper is to
create a model of how IT tools can be employed in corporate communication
processes and PR management activity. The methodological framework of the study
is modeling: a conceptual model incorporating the major components
(technological infrastructure, digital channels, analytical systems, feedback
processes) and their interrelationships was developed through analyzing
existing theoretical concepts and the applied practices of large international
companies for 2021-2024. The model makes it possible to estimate the influence
of IT deployment on the effectiveness of communications through characteristics
such as response rate, coverage, customization, and content-related reputation
risk management. The results of the study indicate that the developed model can
be an instrumental tool for corporate communication planning and monitoring, as
well as for adapting strategies based on the profiles of potential target
publics. The novelty of the work lies in the creation of a theoretical
structure that combines technological and organizational aspects, through which
the effects of IT on corporate communication can be traced. The findings have
practical implications for organizational managers and public relations
practitioners who interact digitally with stakeholders and seek to amplify the
effectiveness of their digital communication activities. |
|
Keywords: |
Information Technology; Corporate Communications; Public Relations; Digital
Transformation |
|
DOI: |
https://doi.org/10.5281/zenodo.18666653 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
A NOVEL IOT BASED PEST IDENTIFICATION AND CLASSIFICATION USING CNN AND LSTM |
|
Author: |
ARUNAPRIYA. R, Dr. S. P. VALLI |
|
Abstract: |
Citrus crops are often damaged by pest infestation, a persistent problem for
growers that frequently leads to significant declines in output and
profitability. Traditional pest detection techniques rely on direct observation
by field workers, which is labour-intensive, time-consuming, and sometimes
unreliable. To detect and categorize citrus pests in real time, this study
proposes an Internet of Things (IoT) based automated pest detection system that
makes use of Convolutional Neural Networks (CNN) and Long Short-Term Memory
(LSTM) networks. Images from IoT-enabled field cameras are processed using a
lightweight hybrid deep learning technique to extract features and make fast
inferences on devices with limited processing power. Six important citrus pest
species are represented by the 1,200 images in the experimental dataset, which
was collected from a citrus research station in Tamil Nadu, India. With an
accuracy of 95.4%, the evaluation results demonstrate that the hybrid CNN-LSTM
architecture performs better than both traditional CNN and conventional machine
learning models. By facilitating automated field monitoring, early pest
detection, and timely decision-making, the proposed method reduces crop losses
and the need for pesticides. |
|
Keywords: |
Artificial Intelligence (AI), Internet of Things (IoT), Citrus, CNN, LSTM, Pests |
|
DOI: |
https://doi.org/10.5281/zenodo.18666661 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
ENHANCING CLOUD RESOURCE UTILIZATION THROUGH INTELLIGENT TASK MIGRATION AND
ADAPTIVE THRESHOLD LOAD BALANCING MECHANISM |
|
Author: |
CHARUL BHANAWAT , MANOJ KUMAR JAIN |
|
Abstract: |
Cloud computing has become the most popular way to deliver computing services
and tools over the internet. The proposed method uses a dynamic threshold-based
algorithm that continuously checks each server's load and relocates jobs to
avoid hotspots. We use a hybrid load balancing method that combines the best
aspects of both static and dynamic algorithms while keeping transfer costs as
low as possible. The Adaptive Threshold Load Balancing (ATLB) mechanism is used
to transfer and balance jobs in cloud computing; it improves cloud resource
management by streamlining migration decisions and adjusting its criteria as
conditions change. The adaptive threshold tunes load thresholds based on
real-time system data. Our experiments show that the method cuts average
response time by 34.7% and improves resource utilization by 28.3% compared to
the common round-robin and least-connection methods. With 96.2% efficiency, the
proposed system keeps the load balanced even as the workload changes.
Comprehensive simulations across different types of cloud settings confirm the
approach's performance. |
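The threshold-and-migrate policy described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact ATLB algorithm: the mean-plus-k-sigma threshold, the `k` parameter, and the move-excess-to-least-loaded rule are assumptions chosen for clarity.

```python
from statistics import mean, pstdev

def adaptive_threshold(loads, k=1.0):
    """Hypothetical adaptive threshold: mean load plus k standard deviations,
    recomputed from real-time load data."""
    return mean(loads) + k * pstdev(loads)

def plan_migrations(loads, k=1.0):
    """Shed each hotspot's excess load onto the least-loaded server.

    Returns the post-migration loads and a list of (source, target, amount)
    migration decisions."""
    t = adaptive_threshold(loads, k)
    loads = list(loads)
    moves = []
    for i, load in enumerate(loads):
        if load > t:  # server i is a hotspot
            j = min(range(len(loads)), key=lambda n: loads[n])
            excess = load - t
            loads[i] -= excess
            loads[j] += excess
            moves.append((i, j, excess))
    return loads, moves
```

For example, with loads `[90, 10, 20, 20]` the threshold is about 67, so only the first server migrates its excess to the idle second server.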
|
Keywords: |
Cloud Computing, Task Relocation, Load Balancing, Resource Usage, Time
Optimization |
|
DOI: |
https://doi.org/10.5281/zenodo.18666670 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
REAL-TIME FOOD CLASSIFICATION USING VGG19 WITH GRAD-CAM VISUALIZATION FOR
ENHANCED INTERPRETABILITY |
|
Author: |
D. LEELA DHARANI, Dr. YERUVA JAIPAL REDDY, Dr. VANKUDOTHU MALSORU, RAVIKIRANREDDY
KANDADI, Dr. KUNAPULI SIVA SATYA MOHAN, Dr. GARLAPATI NARAYANA, Dr. VENKATA
RAMANA.N, JYOTHSNA PALAGANI, Dr. SIVA KUMAR PATHURI, Dr. D. HARITHA |
|
Abstract: |
As nutritional disorders increase because of unbalanced diets and the numerous
cuisines available on the market, human physical and mental health is
increasingly at risk. Accurate food categorization is essential for tracking
food consumption and raising nutrition awareness. This paper proposes a food
recognition model based on the VGG19 deep convolutional neural network with
Grad-CAM (Gradient-weighted Class Activation Mapping) visualization of the
model's predictions to improve interpretability. The model is tested on an
extensive dataset of 53 dishes from international cuisines containing both
non-vegetarian and vegetarian food items. VGG19 is chosen because it performs
well in feature detection, and Grad-CAM highlights the regions of each image
that drive the model's decisions, making them more transparent and trustworthy.
This explainable AI model enables users and researchers to visualize and verify
classification results. Experimental analysis shows that the enhanced VGG19
classifier with Grad-CAM achieves an accuracy of 96%, outperforming traditional
machine learning models including Naive Bayes and Decision Tree. The proposed
approach is effective in real-time applications for food intake monitoring and
diet assessment. |
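The Grad-CAM computation the abstract relies on reduces to a few array operations. A minimal sketch of the core step, assuming the convolutional-layer activations and the gradients of the target class score have already been obtained from VGG19 (or any CNN); the array shapes are illustrative:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's activations and the gradients of
    the class score w.r.t. those activations; both shaped (channels, H, W)."""
    weights = gradients.mean(axis=(1, 2))             # alpha_k: global average pool
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0)                          # ReLU keeps positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1] for overlay
    return cam
```

The resulting map is upsampled to the input resolution and overlaid on the food image to show which regions drove the prediction.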
|
Keywords: |
Nutrition Disorder, Grad-CAM, VGG-19, Naive Bayes, Decision Tree. |
|
DOI: |
https://doi.org/10.5281/zenodo.18738472 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
MICROCHAETUS RAPPI-INSPIRED SECURE AND REGENERATIVE ROUTING PROTOCOL (MR-SRRP)
FOR ADAPTIVE AND ENERGY-EFFICIENT CLOUD COMMUNICATION NETWORKS |
|
Author: |
VATCHALA B , PREETHI G |
|
Abstract: |
Cloud communication networks require routing mechanisms that sustain security,
adaptability, and energy efficiency under rapidly changing virtual workloads and
dense topologies. Conventional routing approaches experience route instability,
excessive energy usage, and delayed recovery in large-scale cloud environments.
This study presents the Microchaetus Rappi-Inspired Secure and Regenerative
Routing Protocol (MR-SRRP), a biologically driven framework that models
regenerative behavior and energy-conserving movement patterns of Microchaetus
rappi within an AODV-based routing structure. The goal of this work is to design
a self-healing and secure routing protocol capable of restoring disrupted paths,
balancing energy consumption, and maintaining low-latency communication. The
underlying hypothesis assumes that embedding biological regeneration and
adaptive intelligence into cloud routing decisions enhances route stability,
packet reliability, energy utilization, and cryptographic diffusion strength.
The protocol integrates encrypted tunneling, trust-driven authentication,
regenerative key scheduling, and dynamic route rejuvenation across twelve
coordinated phases. Simulation evaluation demonstrates improved delay, packet
delivery, throughput, and energy conservation, with an avalanche diffusion
strength of 86.14%, validating the protocol’s effectiveness for adaptive and
secure cloud communication environments. |
|
Keywords: |
Routing, Cloud, Encryption, Security, Avalanche, Energy |
|
DOI: |
https://doi.org/10.5281/zenodo.18666685 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
INTELLIGENT OPTICAL CHARACTER RECOGNITION THROUGH CNN-LSTM FUSION WITH
DICTIONARY VALIDATION AND ERROR CORRECTION |
|
Author: |
A.NARESH KUMAR , S.APARNA |
|
Abstract: |
OCR has revolutionized the process of text extraction and digitization, which is
playing a key role in industries including document processing, healthcare, and
finance. Although such models have been developed, conventional OCR systems are
usually not capable of handling mixed, low quality, and noisy data. To overcome
these limitations, the hybrid CNN-LSTM model combined with a dictionary-based
confidence validation algorithm is proposed in this paper, where Convolutional
Neural Networks (CNN) is used to extract spatial features effectively and Long
Short-Term Memory (LSTM) networks to learn sequential data. Standard data
preprocessing algorithms such as normalization, augmentation and Isolation
Forest based outlier detection are applied to streamline the input data. A
finite automata model represents the flow of data, and this gives a structured
view of the model transitions. Also, a new confidence validation algorithm
compares predictions to a medical dictionary, correcting low-confidence
predictions, thus minimizing false predictions. An edit distance-based
correction algorithm with first-character matching constraints further enhances
accuracy by intelligently correcting OCR misrecognitions within a two-character
tolerance, achieving only 1.4% false correction rate. The entire system of
preprocessing has resulted in 6.3% increase in accuracy when compared to simple
methods of normalization. This research methodology has significantly enhanced
high text recognition accuracy and reliability to more efficient OCR systems
with the capability to be tailored to meet arduous real-world environments in
applications requiring high accuracy like domain-specific applications. |
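The edit distance-based correction with a first-character constraint can be sketched as follows. The two-edit tolerance and first-character rule follow the abstract; the tiny dictionary and the closest-match tie-breaking are illustrative assumptions:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming (two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def correct(word, dictionary, max_dist=2):
    """Return the closest dictionary entry that shares the word's first
    character and lies within a two-edit tolerance; otherwise keep the
    OCR output unchanged."""
    candidates = [w for w in dictionary if w and word and w[0] == word[0]]
    best = min(candidates, key=lambda w: edit_distance(word, w), default=None)
    if best is not None and edit_distance(word, best) <= max_dist:
        return best
    return word
```

For instance, an OCR output like `"insulen"` would be corrected to `"insulin"` (one substitution), while a token with no same-initial dictionary entry is left as-is.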
|
Keywords: |
Optical Character Recognition, Hybrid CNN-LSTM Model, Feature Extraction, Medical
Text Recognition, Edit Distance Correction |
|
DOI: |
https://doi.org/10.5281/zenodo.18666690 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
HIERARCHICAL SOFTWARE-DEFINED NETWORKING ARCHITECTURE FOR NEXT-GENERATION
MOBILITY MANAGEMENT |
|
Author: |
WONYONG YOON |
|
Abstract: |
Achieving Ultra-Reliable Low-Latency Communication (URLLC) in mobility scenarios
for 5G or next-generation mobile networks is challenged by the dense deployment
of small cells and the high frequency of handovers. Traditional 3GPP mobility
protocols and existing centralized Software-Defined Networking (SDN)
architectures suffer from signaling overhead and handover delay due to relying
on the top-down orchestration of forwarding path changes. To overcome this
fundamental limitation, we propose an architecture of hierarchical controllers
that utilize a novel bottom-up orchestration strategy for mobility support. In
this bottom-up orchestration, geographically distributed hierarchical
controllers collocated with base stations can trigger handover execution,
allowing target base stations to push user plane forwarding rules up the
hierarchy. Through numerical analyses, we validate that the proposed
hierarchical controller architecture and bottom-up orchestration reduce network
bandwidth consumption for handover signaling by 55% and handover delay by 47%
compared to the
traditional tunneling-based mobility management method. The results support that
the proposed architecture and orchestration mechanism can be a viable solution
to provide efficient mobility support in 5G or next-generation dense cellular
networks. |
|
Keywords: |
Software-Defined Networking, 5G Network, Mobility, Hierarchical Controller,
Bottom-Up Orchestration. |
|
DOI: |
https://doi.org/10.5281/zenodo.18666705 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
OPTIMISED FINGERPRINT FEATURE REPRESENTATION FOR RELIABLE IDENTIFICATION |
|
Author: |
JAINY JACOB M. , D. SHANMUGAPRIYA |
|
Abstract: |
Fingerprint recognition has become a widely used biometric authentication
procedure, but its performance is typically degraded by noise, misalignment,
and partial impressions, which render existing feature extraction and
classification schemes ineffective. To address these issues, this research
proposes an adaptive fingerprint recognition model that incorporates enhanced
feature extraction and effective classification methods. Specifically, it
introduces the Selective Spatio-Temporal Residual Feature Framework (STRFF-Net)
for discriminative spatio-temporal feature extraction and the Temporal Residual
Flow Recognition Network (TRFRN) for accurate fingerprint identification.
STRFF-Net uses residual flow modeling and attention to highlight salient ridge
and minutiae patterns and discard irrelevant or noisy regions, providing an
enriched feature representation. TRFRN uses an attention-based classification
stream to investigate spatial and temporal change patterns in fingerprints,
enabling reliable recognition even of smudged and partial prints. Experimental
results on publicly available fingerprint data show that feature quality,
activation strength, and classification accuracy are substantially improved
compared to existing machine learning techniques. The robustness and precision
of the proposed framework make it a suitable instrument for realistic biometric
authentication, forensic identification, and other security-critical
applications. |
|
Keywords: |
Attention Mechanism, Biometric Authentication, Fingerprint Recognition, Residual
Flow, STRFF-Net, TRFRN. |
|
DOI: |
https://doi.org/10.5281/zenodo.18666729 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
DECISION SUPPORT FRAMEWORK FOR EARLY PREDICTION OF DIABETIC PATIENT READMISSION
USING HYBRID METAHEURISTICS |
|
Author: |
P. VENKATA KISHAN RAO , AARTI , A. SURESH RAO |
|
Abstract: |
Hospital readmissions are a significant health issue for people with diabetes
and waste valuable healthcare resources. The model proposed in this research is
a hybrid one, combining feature selection using Grey Relational Analysis (GRA),
optimization through Particle Swarm Optimization (PSO), and hyperparameter
tuning by Grey Wolf Optimization (GWO). The most important predictors of
readmission risk among demographic, clinical, and laboratory parameters were
pre-processed and analyzed using the UCI Diabetic Readmission Database. GRA and
PSO enabled effective selection of informative features, and GWO tuned the
parameters of base classifiers such as Decision Tree (DT), Random Forest (RF),
Gradient Boosting (GB), Logistic Regression (LR), and k-Nearest Neighbours
(KNN). The ensemble model reported here achieved the highest accuracy of 98.6%,
compared with 97.3% for the decision tree (DT) and 97.6% for gradient boosting
(GB). Other metrics, including F1-score (98.5%), Kappa (0.987), and AUC
(0.999), further confirmed its robustness. Stratifying patients into high,
moderate, and low risk enables earlier interventions, making the model a useful
decision support system for diabetic readmission reduction initiatives and the
delivery of care. |
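As an illustration of the swarm-based optimization stage, a minimal particle swarm optimizer (minimization) in pure Python. The inertia and acceleration coefficients are conventional textbook defaults, not the paper's tuned values, and the sphere function in the usage note stands in for the real readmission-model fitness:

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO: particles track personal bests, the swarm tracks a global
    best, and velocities blend inertia, cognitive, and social pulls."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:           # update personal best
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:          # update global best
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

Usage: `pso(lambda x: sum(v * v for v in x), dim=3)` drives the swarm toward the origin; in the paper's setting the fitness would instead score a candidate feature subset or hyperparameter vector by cross-validated classifier performance.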
|
Keywords: |
Diabetic Readmission, Ensemble Learning, Grey Relational Analysis (GRA), Grey
Wolf Optimization (GWO), Particle Swarm Optimization (PSO). |
|
DOI: |
https://doi.org/10.5281/zenodo.18666740 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
MACHINE LEARNING BASED GCN-LSTM MODEL FOR CROP YIELD PREDICTION USING
SPATIAL-TEMPORAL FEATURE LEARNING |
|
Author: |
MAMTA KUMARI , SUMAN , DEVENDRA PRASAD |
|
Abstract: |
Prior research has identified limited data and minimal use of soil
characteristics as significant shortcomings. To address these issues, this
article conducted extensive data collection for bajra yield prediction and
introduces a novel Graph Convolutional Network and Long Short-Term Memory
(GCN-LSTM) model, which consists of three main stages: data collection and
processing; spatial feature learning; and prediction. The model overcomes the
limitations of traditional methods by using IT-based data analytics and
advanced deep learning, making more accurate predictions that can be used in
smart agriculture, resource optimization, and food security. Unlike previously
studied deep learning models such as Recurrent Neural Networks (RNN), Long
Short-Term Memory (LSTM), and Convolutional Neural Networks (CNN), the proposed
GCN-LSTM model does not assume independence among the districts in the crop
yield prediction (CYP) data. Instead, it processes attributes related to crop
yield prediction such as meteorological, soil, and climate data. The spatially
learned features are then leveraged by the LSTM model for the temporal
prediction of crop yield. The performance of the GCN-LSTM model is evaluated on
RMSE, R², and correlation coefficients. The experimental results demonstrate
that the proposed model significantly outperforms conventional models by
effectively incorporating spatial information. |
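The spatial stage rests on the standard graph-convolution update H' = ReLU(D^-1/2 Â D^-1/2 H W) with Â = A + I. A minimal sketch of one such layer, where the district adjacency A, feature matrix H, and weight matrix W are placeholders for the model's actual inputs:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: add self-loops, symmetrically normalize
    the adjacency, then apply a linear map and ReLU."""
    A_hat = A + np.eye(A.shape[0])               # self-loops keep each node's own features
    d = A_hat.sum(axis=1)                        # node degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))       # D^-1/2
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

Stacking such layers mixes each district's features with those of its neighbours; the resulting spatial embeddings are what an LSTM would then consume as a temporal sequence.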
|
Keywords: |
Graph Convolution Network, Bajra Crop, Soil Data, Spatial Feature Learning, and
Soil Characteristics. |
|
DOI: |
https://doi.org/10.5281/zenodo.18666754 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
STATISTICAL FORECASTING BASED ON CONDITIONAL DENSITY ESTIMATION |
|
Author: |
RAVINDRA CHANGALA, K.KIRAN KUMAR, S. RENU DEEPTI, V. PREETHI, G. SUSHMA KIRAN
KUMAR KAVETI, NATHA DEEPTHI |
|
Abstract: |
Many real problems, such as stock market prediction and weather forecasting,
have inherent randomness associated with them. Adopting a probabilistic
framework for prediction can accommodate this uncertain relationship between
past and future. Typically the interest is in the conditional probability
density of the random variable involved. One approach to prediction is via time
series and autoregression models. In this work, a linear prediction method and
an approach for calculating the prediction coefficients are given, and the
probability of error for different estimators is calculated. The existing
techniques all require, in some respect, estimating a parameter of some assumed
solution. So an alternative approach is proposed: to estimate the conditional
density of the random variable involved. The approach proposed in this work
estimates the (discretized) conditional density using a Markovian formulation;
when two random variables are statistically dependent, knowing the value of one
of them lets us get a better estimate of the value of the other. The
conditional density is estimated as the ratio of the two-dimensional joint
density to the one-dimensional density of the conditioning random variable,
wherever the latter is positive. Markov models are used in problems of making a
sequence of decisions and in problems that have an inherent temporality, that
is, that consist of a process unfolding in time. In continuous-time Markov
chain models the time intervals between two consecutive transitions may also be
continuous random variables. The Markovian approach is particularly simple and
fast for almost all classes of problems requiring the estimation of conditional
densities. |
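The estimator described, the ratio of the two-dimensional joint density to the one-dimensional marginal wherever the marginal is positive, can be sketched directly with histograms. The bin count and the data in the usage note are illustrative:

```python
import numpy as np

def conditional_density(x, y, bins=10):
    """Discretized p(y | x): the 2-D joint histogram density divided by the
    1-D marginal of x, on bins where that marginal is positive."""
    joint, x_edges, y_edges = np.histogram2d(x, y, bins=bins, density=True)
    marginal_x = (joint * np.diff(y_edges)).sum(axis=1)  # integrate joint over y
    cond = np.zeros_like(joint)
    pos = marginal_x > 0
    cond[pos] = joint[pos] / marginal_x[pos][:, None]    # ratio only where marginal > 0
    return cond, x_edges, y_edges
```

By construction each row of the result integrates to one over y whenever the corresponding marginal bin is positive, which is what makes it a conditional density estimate.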
|
Keywords: |
Statistical Prediction, Unbiasedness, Sufficiency, Smoothing, Univariate Time
Series, Autoregressive, Markov Chains. |
|
DOI: |
https://doi.org/10.5281/zenodo.18666768 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
VESSEL LOOP STRUCTURE-AWARE MULTI-GRADE DIABETIC RETINOPATHY CLASSIFICATION
USING ENHANCED TCLLKEN |
|
Author: |
JAGADEESH VUTLA , VISALAKSHI ANNEPU |
|
Abstract: |
To prevent blindness and blurred vision, timely and accurate Diabetic
Retinopathy (DR) diagnosis is essential. Numerous studies have therefore
classified DR using Artificial Intelligence (AI) algorithms and Retinal Fundus
Images (RFIs). A fundamental biomarker for recognizing DR is the vessel loop.
However, conventional works failed to investigate the Vessel Loop Structure
(VLS) during DR classification, causing poor DR grading. Thus, to improve
diagnostic reliability, a VLS-aware multi-grade DR classification framework is
pivotal. This work is driven by the need to develop a more accurate, VLS-aware
multi-grade DR classification method using Transfer Complementary Log-Log
Karplus-EfficientNet (TCLLKEN). First, the eye-retinal images are gathered
and further pre-processed, followed by gray-scale conversion. After that,
regions in the gray-scale images are grouped. In the meantime, from the
pre-processed images, the Green Channel (GC) is extracted. Subsequently, vessel
structure segmentation, vessel graph construction, and VLS extraction are
carried out. Lastly, the extracted features, extracted VLS, and pre-processed
images are inputted into the proposed TCLLKEN that classifies the multi-grades
of the DR efficiently. Overall, through the inclusion of VLS analysis, DR
grading accuracy is significantly enhanced across all stages. Hence, highly
accurate multi-grade DR classification is achieved by the proposed VLS-aware
TCLLKEN model with an accuracy of 98.99%, thus performing better than existing
methods. |
|
Keywords: |
Diabetic Retinopathy (DR), Blood Vessels (BV), Vessel Loop Structure (VLS),
Retinal Lesions (RL), Deep Learning (DL), Proliferative Diabetic Retinopathy
(PDR), and Retinal Fundus Images (RFI). |
|
DOI: |
https://doi.org/10.5281/zenodo.18666781 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
LIGHTWEIGHT CNN-TRANSFORMER FUSION MODEL FOR AUTOMATED RIB FRACTURE LOCALIZATION
IN RADIOGRAPHS |
|
Author: |
NAREGALKAR AKSHAYKUMAR RANGNATH , ANE ASHOK BABU , NAGAMALLESWARA RAO PURIMETLA
, SHAIK SALMA BEGUM , DEEPAK V , S SINDHURA , N JAYA |
|
Abstract: |
This study proposes a Lightweight CNN–Transformer Fusion Model for the automated
detection of rib fractures in chest radiographs. Rib fractures represent one of
the most common trauma-related injuries; however, their identification through
conventional radiography remains challenging due to overlapping anatomical
structures and the subtle nature of fracture lines. The proposed hybrid
architecture integrates Convolutional Neural Networks (CNNs) for capturing
fine-grained local features with Transformer encoders to model long-range global
dependencies, thereby enhancing interpretability and diagnostic accuracy. A
dataset comprising 6,200 chest radiographs was utilized to evaluate the model’s
performance against standard benchmarks. Experimental results demonstrated
superior performance, achieving a precision of 94.6%, sensitivity of 92.3%,
specificity of 95.7%, and an F1-score of 91.9%, outperforming CNN-only,
Transformer-only, and CT-based baseline models. Furthermore, the lightweight
architecture enabled real-time inference at 32 frames per second, ensuring
suitability for deployment in resource-constrained healthcare environments.
Overall, the findings highlight the potential of hybrid deep learning frameworks
to significantly enhance clinical decision-making by providing accurate,
efficient, and interpretable rib fracture detection in chest radiography. |
|
Keywords: |
Rib Fracture Detection, CNN–Transformer Fusion, Medical Image Analysis, Chest
Radiographs, Deep Learning |
|
DOI: |
https://doi.org/10.5281/zenodo.18666797 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
MULTI MODAL AI FOR IMPROVING ACCESSIBILITY THROUGH SPEECH AND GESTURE
RECOGNITION FOR THE DISABLED |
|
Author: |
RAVINDRA CHANGALA , Dr. K PAVAN KUMAR, Dr. S ARUNA, P MADHAVI, N SRINIVASA RAO,
CH LAVANYA SUSANNA, Dr. C SUGANYA |
|
Abstract: |
In this paper, we report on the design and evaluation of a new multimodal AI
system that combines speech and gesture recognition to enhance accessibility
for people with disabilities. We aim to create a flexible and resilient
interface that allows individuals with various disabilities to interact with
technology using one or both input modalities. The model comprises deep
learning models for speech (LSTM-based) and gestures (CNN-based) and a
multi-attention fusion model to combine the outputs of both modalities. A
proprietary speech dataset of people with speech disabilities and gesture
videos of people with motor disabilities was used to train and test the system.
The experimental results demonstrate that our multimodal model outperforms
speech-only and gesture-only baseline systems in terms of global F1-score
(0.90) and reduces task duration by 28% compared to the baseline systems
(Google Assistant and Microsoft Kinect). The speech subsystem was also robust
to noise, performing with 82% accuracy at high noise levels. User-reported
accessibility, ease of use, and performance were very high. This paper shows
how multimodal AI can contribute toward building inclusive, user-friendly
technologies for people with disabilities and is a solid step toward greater
real-life usability. |
|
Keywords: |
Multi-modal AI, Speech Recognition, Gesture Recognition, Accessibility, Deep
Learning, Assistive Technology |
|
DOI: |
https://doi.org/10.5281/zenodo.18666805 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
ENHANCING CONTENT RETRIEVAL WITH BIG DATA AND NATURAL LANGUAGE PROCESSING FOR
SCALABLE AND SEMANTIC SEARCH SYSTEMS |
|
Author: |
S SUJANTHI , Dr. MULUMUDI SUNEETHA , NARASIMHA RAO THOTA , N SRIHARI RAO ,
CHITNEEDI KASI VISWANADHAM , Dr. B HEMANTHA KUMAR , P S V S SRIDHAR |
|
Abstract: |
Content search and retrieval systems are required to be more efficient due to
the data's high volume and complexity. This paper presents a new way to combine
Big Data techniques with high-end Natural Language Processing (NLP) models to
improve the search procedure's accuracy, relevance, and scalability. We aim
to build a system that effectively uses distributed Big Data infrastructure for
data processing and cutting-edge NLP models for semantic query interpretation.
We evaluate the system over three datasets: Common Crawl (web content), Medical
Text Mining, and Amazon Product Reviews, and compare it to traditional
keyword-based search and TF-IDF and Word2Vec based approaches. The
experimental results show that our system achieves better precision, recall,
F1-score, and Mean Average Precision (MAP) than previous works at a reasonable
query response time. Combining Big Data and NLP produced results that were much
more relevant and contextually aware. This work is a significant step toward
better content search in many application domains; it makes more accurate and
efficient retrieval possible and enables a personalized search experience. The proposed
integration of Big Data infrastructure with advanced NLP models enables scalable
and semantically rich retrieval, addressing key limitations of existing
keyword-centric and shallow semantic search systems. |
|
Keywords: |
Big Data, Natural Language Processing, Content Search, Semantic Search,
Precision, Information Retrieval |
|
DOI: |
https://doi.org/10.5281/zenodo.18666811 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
ENHANCING AUTOMATED CODE GENERATION WITH TRANSFORMER MODELS AND REINFORCEMENT
LEARNING: A DEEP LEARNING APPROACH TO SOFTWARE DEVELOPMENT |
|
Author: |
T PRAVEEN KUMAR , ANNAPURNA GUMMADI , SYED NAFEES AHAMED , N SRIHARI RAO ,
CHITNEEDI KASI VISWANADHAM , DEVAKI K , P S V S SRIDHAR |
|
Abstract: |
Automated programming using deep learning can shorten the code development
process and ensure the software built is of high quality. In this paper, we
investigate the combination of transformer models and reinforcement learning
(RL) for the automatic generation of code. The aim is to design a model that
produces correct, consistent, and valuable code in different programming
languages. Our experiments utilized 5 million code samples from several
open-source repositories, and models were evaluated on syntactic correctness,
code quality, execution accuracy, and generation speed. Our model outperforms
conventional LSTM-based approaches and
GPT-2, achieving excellent syntactic correctness and execution accuracy, the
highest code quality marks (8.5/10), and completing tasks in less time. The
findings demonstrate that combining deep learning and RL enables the creation of
top-quality code efficiently. By applying AI to software development, this work
finds that both speed and reliability noticeably improve, which is beneficial
for all parties involved and the broader industry. Despite the success of deep
learning in natural language processing, automated code generation continues to
face challenges related to execution correctness, code quality, and scalability
across programming languages. Experimental results demonstrated that the
proposed transformer–reinforcement learning framework achieved higher syntactic
correctness, execution accuracy, and reduced generation time compared to
existing LSTM-based and transformer-only models, indicating its suitability for
real-world software development tasks. |
|
Keywords: |
Automated Code Generation, Deep Learning, Reinforcement Learning, Transformer
Models, Code Quality, Software Development |
|
DOI: |
https://doi.org/10.5281/zenodo.18666822 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
ENHANCING AUTONOMOUS DRONE DELIVERY SYSTEMS: A HYBRID CNN-LSTM APPROACH FOR
ROBUST OBJECT TRACKING IN DYNAMIC ENVIRONMENTS |
|
Author: |
ANNAPURNA GUMMADI , RAVINDRA CHANGALA , PULLAIAH PINNIKA, B SRIVANI,
VIJAYAKUMARI RODDA, Dr. N NEELIMA, R KIRTHIGA |
|
Abstract: |
Robust object tracking is needed in autonomous drone delivery to manage
changing surroundings appropriately. This research introduces a method that
uses CNNs to extract features and LSTM networks to model the temporal sequence
of object appearances in video. The primary goal is to enhance the tracking
system's reliability as objects move, become occluded, and the environment
changes unpredictably. We measured the precision, success rate, real-time FPS,
and occlusion handling of the method on the MOT Challenge benchmark and our
custom drone flight dataset. Experiments found that the proposed model is more
accurate than SiamFC, DeepSORT, and CNN + Kalman Filter, with a precision of
94% and a success rate of 92%, and can operate at 30 FPS in live tests. The
model is well suited to uncrewed drone operations because it can overcome
occlusions and recover lost tracks. This approach improves how autonomous drones
navigate, making them useful for logistics, keeping watch, and handling
emergencies. The primary contribution of this work lies in the development of a
hybrid CNN–LSTM architecture integrated with sensor fusion for robust real-time
object tracking. The proposed approach advances existing solutions by improving
tracking accuracy, occlusion recovery, and real-time performance in dynamic
drone delivery environments. |
|
Keywords: |
Autonomous Drones, Object Tracking, Deep Learning, Convolutional Neural Networks
(CNNs), Long Short-Term Memory (LSTM), Sensor Fusion |
|
DOI: |
https://doi.org/10.5281/zenodo.18666829 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
ANALYSIS OF CLASSIFICATION ACCURACY FOR A MULTI-EVIDENCE DATASET CONTAINING
DISCRETE AND CONTINUOUS VALUES USING CATEGORICALNB AND GAUSSIANNB |
|
Author: |
DR. VIJAY KUMAR VERMA |
|
Abstract: |
Machine Learning is one of the emerging fields in computer science, and Bayes’
theorem can be applied to predict class labels precisely and accurately. Naïve
Bayes classification, specifically the Categorical Naïve Bayes (CategoricalNB or
CNB) classifier, is a simple probabilistic model based on Bayes’ theorem and is
highly effective in reducing computational cost, as conditional probabilities
can be easily estimated from data. In this paper, we employ both Categorical
Naïve Bayes (CNB) and Gaussian Naïve Bayes (GNB) classifiers. The CNB classifier
is widely used for binary and multi-class classification problems in machine
learning and performs well when the attribute values are categorical or
discrete. However, it has certain limitations when handling continuous data. In
real-world applications, datasets often contain attributes in mixed formats,
including both discrete and continuous values. For continuous attributes, CNB
requires discretization, whereas GNB can be directly applied to continuous data
by assuming a Gaussian distribution. In this study, we use CNB and GNB to
classify a dataset containing both discrete and continuous attributes and
compare their classification accuracy. A real-life dataset obtained from the
HDFC Bank home loan department was used for experimentation. The dataset
consists of 6,000 records with 19 attributes. Experimental results show that the
performance of the Gaussian Naïve Bayes classifier is superior to that of the
Categorical Naïve Bayes classifier. However, a limitation of GNB is its
assumption that continuous attributes follow a normal distribution. |
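The contrast the abstract draws, CategoricalNB counting discrete values while GaussianNB fits a normal density to continuous ones, can be sketched on a toy mixed-type record. The loan-style fields and numbers below are illustrative stand-ins, not the paper's HDFC dataset, and the hybrid per-column treatment is a simplification of the two separate classifiers compared in the study:

```python
import math
from collections import Counter, defaultdict

# Toy mixed-type loan rows: (income [continuous], employment [categorical]) -> approved?
data = [
    (4.2, "salaried", 1), (6.8, "salaried", 1), (7.5, "business", 1),
    (2.1, "salaried", 0), (1.8, "business", 0), (3.0, "business", 0),
    (5.9, "salaried", 1), (2.5, "business", 0),
]

def gaussian_pdf(x, mean, var):
    """Gaussian likelihood GNB assumes for a continuous attribute."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit(rows):
    """Per class: (mean, var) for the continuous column (GNB part),
    Laplace-smoothed counts for the categorical column (CNB part)."""
    values, cat_counts, priors = {}, defaultdict(Counter), Counter()
    for x, c, y in rows:
        priors[y] += 1
        cat_counts[y][c] += 1
        values.setdefault(y, []).append(x)
    gauss = {}
    for y, xs in values.items():
        mean = sum(xs) / len(xs)
        gauss[y] = (mean, sum((x - mean) ** 2 for x in xs) / len(xs))
    return gauss, cat_counts, priors

def predict(x, c, gauss, cat_counts, priors, n_cats=2):
    total = sum(priors.values())
    best, best_p = None, -1.0
    for y in priors:
        p = priors[y] / total                  # class prior
        p *= gaussian_pdf(x, *gauss[y])        # continuous attribute, GNB-style
        p *= (cat_counts[y][c] + 1) / (priors[y] + n_cats)  # categorical, CNB-style
        if p > best_p:
            best, best_p = y, p
    return best

gauss, cat_counts, priors = fit(data)
print(predict(6.0, "salaried", gauss, cat_counts, priors))  # high income -> approved
print(predict(2.0, "business", gauss, cat_counts, priors))  # low income -> rejected
```

Binning the income column into discrete ranges, as CNB's discretization requires, would discard exactly the within-bin resolution that the Gaussian density preserves, which is the abstract's explanation for GNB's higher accuracy.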
|
Keywords: |
Classification, Machine Learning, Naïve Bayes, Continuous, Discrete, Gaussian
Naïve Bayes, Accuracy |
|
DOI: |
https://doi.org/10.5281/zenodo.18666839 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
ARTIFICIAL INTELLIGENCE-POWERED LANDSCAPE VISION TRANSFORMATION TECHNIQUE FOR
EARLY FLOOD RISK DETECTION THROUGH SATELLITE MAPPING SYSTEM USING DEEP FEATURE
ENGINEERING |
|
Author: |
Dr. B. SHARMILA, Dr. A. V. SANTHOSH BABU |
|
Abstract: |
Environmental development depends heavily on natural resources, and the
landscapes that sustain human life and technological growth are vulnerable to
natural hazards. Sloping mountain regions in particular are prone to slope
failure, and sudden large floods in these regions severely endanger human
lives. This paper addresses the critical issue of early flood risk detection in
sloping mountain regions, where human lives and technological infrastructure
are heavily reliant on natural resources. Satellite images and AI techniques
are crucial for making such analysis effective. Still, traditional methods fail
to capture feature mapping and the variability present in features across
increasing time frames, resulting in poor recall: they cannot reliably detect
the flood quantity across regions where rainfall occurred and the surrounding
landscape regions. To resolve this problem, we propose an AI-powered Landscape
Vision Transformation Technique (LVTT) for early flood risk detection through a
satellite mapping system using deep feature engineering. Initially, landscape
region images are collected from the satellite monitoring system, and
preprocessing is carried out with an adaptive Gaussian filter. Historic
color-difference-based Slice Window Canny Edge Detection (SWCES) is then used
to segment the two-layer flooding and water flow region with dependable
features. Next, Region Scaled Ant Colony Optimization (RSACO) selects the
variability region features that mutually determine the flooding variation
region. Finally, the Hyper Capsuled LSTM Gated Convolution Neural Network
(LSTMG-CNN) predicts the flood region covering the at-risk landscape, enabling
early detection of floods to safeguard human lives. The proposed system
achieves high performance by predicting flooding variation limits at an early
stage from the flood flow in the image, attaining higher precision, recall, and
detection accuracy than conventional methods. |
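The adaptive Gaussian filter used for preprocessing builds on plain Gaussian smoothing; a minimal sketch of that non-adaptive baseline follows (the kernel-radius heuristic and test signal are illustrative, and 2-D images would apply the 1-D pass row-wise then column-wise):

```python
import math

def gaussian_kernel(sigma, radius=None):
    """Discrete, normalized 1-D Gaussian kernel."""
    if radius is None:
        radius = max(1, int(3 * sigma))       # common 3-sigma truncation heuristic
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, sigma):
    """Convolve with edge replication at the borders."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - r, 0), len(signal) - 1)  # replicate edges
            acc += w * signal[idx]
        out.append(acc)
    return out

noisy = [0, 0, 10, 0, 0]          # a single noise spike
print(smooth(noisy, sigma=1.0))   # spike spread over its neighbours
```

An adaptive variant, as the abstract implies, would vary `sigma` per pixel (e.g. by local variance) rather than use one global value.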
|
Keywords: |
Risk Detection, Satellite Imagery, Deep Learning, Feature Engineering, Landscape
Analysis, Artificial Intelligence, Convolutional Neural Networks, LSTM. |
|
DOI: |
https://doi.org/10.5281/zenodo.18666925 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
BLOCKCHAIN BASED TRUST AWARE FRAMEWORK FOR SECURE AND TRANSPARENT CONVENTIONAL
ACADEMIC RESULT PUBLISHING SYSTEM |
|
Author: |
SHEELA D V, DR. ASHOK KUMAR T A |
|
Abstract: |
In the evolving landscape of educational assessment and result publishing,
blockchain technology holds a transformative approach for managing the
examination results. The traditional approach for publishing results possesses
various challenges including data security vulnerabilities, delayed result
distribution, lack of transparency, administrative inefficiencies and so on.
Despit recent efforts using online examination platforms and blockchain based
grade storage, existing solutions typically do not integrate decentralized
tamper proof storage, automated access control and dynamic trust modeling within
a single architecture, leaving result data exposed to manipulation, manual entry
errors, weak auditability and limited interoperability. This research presents a
novel framework for secure and efficient result publishing BTRP (Blockchain
based Trust aware Result Publishing). Initially, the proposed model integrates
blockchain technology with an efficient distributed database, utilizing Smart
Contracts (SC) to ensure secure data storage and sharing. Secondly, we
incorporate a trust-aware feedback mechanism into result publishing, allowing
participants to receive feedback based on their behaviour. This feedback is
dynamically updated on the blockchain through smart contracts, ensuring that
only honest and authorized participants have opportunities for data sharing.
Additionally, advanced cryptographic mechanisms are incorporated for feedback
submission, which protects the privacy of participant information. The
front-end architecture of BTRP comprises Student, Examiner and Admin as the
participants in result publishing; an authorized login dashboard is also
designed for the Examiner and Admin participants to update results. Comparative
analysis highlights BTRP’s superiority in addressing known limitations of
existing e-learning and blockchain academic systems, such as manual data entry
vulnerabilities, privacy concerns and weak access control. |
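The trust-aware feedback loop described above can be sketched off-chain; the update rule, threshold, and class below are illustrative assumptions, not BTRP's published smart contract logic:

```python
class TrustRegistry:
    """Illustrative trust-aware access gate: participants accrue or lose trust
    from feedback, and only sufficiently trusted ones may publish results."""
    THRESHOLD = 0.5

    def __init__(self):
        self.trust = {}                      # participant -> score in [0, 1]

    def register(self, who):
        self.trust[who] = 0.5                # neutral starting trust

    def feedback(self, who, honest, alpha=0.2):
        """Exponential moving average toward 1 (honest) or 0 (dishonest)."""
        target = 1.0 if honest else 0.0
        self.trust[who] = (1 - alpha) * self.trust[who] + alpha * target

    def may_publish(self, who):
        return self.trust.get(who, 0.0) >= self.THRESHOLD

reg = TrustRegistry()
reg.register("examiner1")
reg.feedback("examiner1", honest=False)
reg.feedback("examiner1", honest=False)
print(reg.may_publish("examiner1"))  # repeated dishonest feedback revokes access
```

On a chain, the same state transition would live in a Smart Contract so the trust history is tamper-proof and auditable, which is the property the framework relies on.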
|
Keywords: |
Result Data, Blockchain, Consensus, Smart Contract, Access Control |
|
DOI: |
https://doi.org/10.5281/zenodo.18666937 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
AL-BIRUNI EARTH RADIUS SPOTTED HYENA OPTIMIZER BASED TRANSFER LEARNING FOR BRAIN
TUMOR CLASSIFICATION USING MRI |
|
Author: |
SRILAKSHMI ALURI, S. SAGAR IMAMBI |
|
Abstract: |
Brain tumor classification involves identifying and categorizing various types
of tumors from Magnetic Resonance Imaging (MRI), a task critical for effective
diagnosis and treatment planning. Existing methods often face challenges such as
MRI noise, imprecise tumor localization, limited feature extraction, and high
computational requirements, which can reduce classification accuracy and
reliability. To address these challenges, an innovative technique named
Al-Biruni Earth Radius Spotted Hyena Optimizer based Convolution Neural Network
with Transfer Learning (BERSHO_CNN with TL) is proposed for classifying brain
tumors. Initially, pre-processing is carried out on MRI images using
Non-Local Means (NLM) filtering. Tumor segmentation is then performed using the
Multi-Attention Dense Network (MAD-Net). By utilizing the segmented images,
feature extraction is conducted through a combination of 3D-Convolutional
Autoencoder+Vision Transformer (3D-CAE+ViT). Finally, the brain tumors are
classified using the proposed CNN with TL model, which is trained using the
BERSHO algorithm. BERSHO is developed by combining the Al-Biruni Earth Radius
(BER) and the Spotted Hyena Optimizer (SHO). The proposed approach demonstrates
enhanced classification accuracy, robust tumor localization, and improved
feature representation compared to existing methods. The devised model has
achieved a Negative Predictive Value (NPV) of 91.788%, accuracy of 92.847%, True
Positive Rate (TPR) of 92.534%, Positive Predictive Value (PPV) of 92.277%, and
True Negative Rate (TNR) of 91.747%. These findings indicate that the proposed
framework provides a practical and efficient tool for automated brain tumor
diagnosis, contributing new knowledge in integrating hybrid optimization with
advanced feature extraction for improved MRI-based tumor classification. |
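The reported figures (NPV, accuracy, TPR, PPV, TNR) follow the standard confusion-matrix definitions; a minimal sketch with hypothetical counts, not the paper's actual results:

```python
def rates(tp, fp, tn, fn):
    """Standard confusion-matrix metrics for a binary outcome."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "TPR": tp / (tp + fn),   # True Positive Rate (sensitivity / recall)
        "TNR": tn / (tn + fp),   # True Negative Rate (specificity)
        "PPV": tp / (tp + fp),   # Positive Predictive Value (precision)
        "NPV": tn / (tn + fn),   # Negative Predictive Value
    }

# Hypothetical counts, chosen only to exercise the formulas
print(rates(tp=90, fp=10, tn=85, fn=15))
```

For multi-class tumor grading these would typically be computed one-vs-rest per class and then averaged.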
|
Keywords: |
Brain tumor classification, Magnetic Resonance Imaging, Al-Biruni Earth Radius,
Spotted Hyena Optimizer, GoogLeNet. |
|
DOI: |
https://doi.org/10.5281/zenodo.18666947 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
TST-YOLO: TOKEN-SELECTIVE TWIN-ENHANCED YOLO FOR ROBUST UNDERWATER FISH
DETECTION AND PHENOTYPE ESTIMATION ON AUVS |
|
Author: |
SIRIGINEEDI MANIKANTA, Dr. RAGHVENDRA KUMAR, Dr. R N V JAGAN MOHAN |
|
Abstract: |
Underwater fish detection remains difficult due to turbidity, color cast, motion
blur, and the prevalence of small, fast, look-alike targets on embedded AUV
hardware. We present TST-YOLO, a compact detector that combines four synergistic
components: (i) a physics-aware digital-twin synthesizer that exposes the model
to realistic water-optics shifts during training; (ii) a lightweight
pre-enhancement fusion stage that mitigates color/contrast bias before
inference; (iii) a token-selective transformer head that prunes and merges
low-information tokens for AUV-grade efficiency; and (iv) a phenotype-guided
auxiliary head that estimates lateral-line scale counts to regularize features
toward biologically meaningful structure. Evaluated under identical training
budgets on DeepFish and DePondFi’23, TST-YOLO improves mAP@50–95 by +6.1 and
+6.0 points, respectively, over strong YOLOv8/YOLOv7 baselines, with +5.5 AP_S
gains on small targets. Confidence calibration also improves (ECE ↓34%, Brier
↓), while the token-selective head reduces transformer tokens by ≈30% at equal
or better accuracy, cutting end-to-end Jetson latency by ~6%. Results are
reported as 5× runs (mean±SD) with bootstrap confidence intervals, paired tests,
and McNemar analyses. Beyond accuracy gains, the contribution is positioned at
the systems level, demonstrating how domain-aware data synthesis,
reliability-oriented evaluation, and token-efficient transformer design can be
jointly integrated for trustworthy, resource-constrained intelligent perception.
This emphasis on efficiency, robustness, and statistical rigor highlights the
relevance of TST-YOLO as an information-technology solution for dependable
autonomous sensing rather than a task-specific detector alone. |
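The token-selective head is described only at a high level; a generic top-k prune-and-merge over token vectors, in the spirit of token-merging methods, can be sketched as follows (the saliency scores, keep budget, and averaging rule are assumptions, not TST-YOLO's design):

```python
def prune_and_merge(tokens, scores, keep):
    """Keep the `keep` highest-scoring tokens; merge the rest into one
    averaged summary token so low-information content is compressed, not lost.
    tokens: list of feature vectors; scores: one saliency value per token."""
    order = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    kept_idx = sorted(order[:keep])          # preserve original token order
    dropped = order[keep:]
    kept = [tokens[i] for i in kept_idx]
    if dropped:
        dim = len(tokens[0])
        merged = [sum(tokens[i][d] for i in dropped) / len(dropped)
                  for d in range(dim)]
        kept.append(merged)                  # single token for the pruned set
    return kept

tokens = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0], [4.0, 0.0]]
scores = [0.9, 0.1, 0.8, 0.2]
print(prune_and_merge(tokens, scores, keep=2))  # two kept tokens + one merged
```

Shrinking the token set like this reduces the quadratic attention cost downstream, which is how a roughly 30% token reduction translates into lower end-to-end latency on embedded hardware.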
|
Keywords: |
Underwater Object Detection; Autonomous Underwater Vehicles (AUVs); Digital-Twin
Augmentation; Token-Selective Transformer; Underwater Image Enhancement;
Small-Object Detection; Phenotype (Lateral-Line) Estimation. |
|
DOI: |
https://doi.org/10.5281/zenodo.18666965 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
Title: |
STYLISTICALLY-AWARE HINDI-ENGLISH POETIC TRANSLATION WITH MBART AND LLM-BASED
POST-EDITING |
|
Author: |
PRAGYA TEWARI, ANURAG SINGH BAGHEL |
|
Abstract: |
Translating poetry across languages poses significant challenges, particularly
for low-resource pairs like Hindi-English, where linguistic, cultural, and
stylistic gaps are substantial. This paper introduces a domain-adapted machine
translation framework designed to preserve the poetic essence like metaphor,
emotion, tone, and rhythm of Hindi poetry in English translation. We fine-tune
mBART50, a multilingual sequence-to-sequence model, on a curated parallel corpus
of Hindi-English poetic pairs, augmented with stylistic tags. We also explore
optional post-editing using large language models (LLMs) to enhance fluency and
poetic expressiveness. Our approach outperforms standard translation systems, as
demonstrated by both automatic metrics and human evaluations. To the best of our
knowledge, this is the first systematic study of Hindi–English poetry
translation that combines (i) the construction of a novel annotated parallel
corpus, (ii) style-aware fine-tuning of mBART50, and (iii) LLM-based
post-editing for poetic refinement. This work represents a step toward building
translation systems that capture not just meaning, but the creative and
emotional depth of poetry, especially in low-resource language settings. |
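Augmenting source lines with stylistic tags before fine-tuning can be sketched as simple prefixing of the source text; the tag names and example line below are illustrative, since the paper's tag inventory is not reproduced here:

```python
def tag_source(hindi_line, styles):
    """Prepend stylistic control tags to a source line so the
    sequence-to-sequence model can condition on them during fine-tuning."""
    prefix = " ".join(f"<{s}>" for s in sorted(styles))  # sort for determinism
    return f"{prefix} {hindi_line}"

print(tag_source("मन के हारे हार है", {"metaphor", "melancholy"}))
```

At inference time the same tags can then be supplied to steer the translation toward the desired style, with the optional LLM post-editing pass polishing fluency afterwards.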
|
Keywords: |
Neural Machine Translation, Hindi Poems, Large Language Models, Fine Tuning, Low
Resource Language |
|
DOI: |
https://doi.org/10.5281/zenodo.18667066 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2026 -- Vol. 104. No. 3-- 2026 |
|
Full
Text |
|
|
|