Submit Paper / Call for Papers
The journal receives papers in a continuous flow and will consider articles from a wide range of Information Technology disciplines, encompassing the most basic research to the most innovative technologies. Please submit your papers electronically to our submission system at http://jatit.org/submit_paper.php in MS Word, PDF, or a compatible format so that they may be evaluated for publication in the upcoming issue. This journal uses a blinded review process; please remember to include all your personally identifiable information in the manuscript before submitting it for review, and we will edit out the necessary information on our side. Submissions to JATIT should be full research / review papers (properly indicated below the main title).
Journal of Theoretical and Applied Information Technology
October 2025 | Vol. 103 No. 19
Title: |
AN IMPLEMENTATION OF ENHANCED INCEPTION-RESIDUAL CONVOLUTIONAL NEURAL NETWORK IN
LUNG CANCER PREDICTION |
|
Author: |
Dr.S.PANDIKUMAR, Dr.S.PREMKUMAR, Dr.T.GUHAN, T.MARGARET MARY, Dr.M.KANNAN,
Dr.V.PREAM SUDHA |
|
Abstract: |
Recent developments in Deep Learning are assisting in pattern recognition,
classification, and quantification. Additionally, DL offers optimal performance
across all industries, particularly in the medical field. For diseases to be
treated and for humans to survive, precise diagnosis is essential. The primary
goal of this study is to develop a useful tool for Computer Aided Diagnosis
(CAD) systems to identify a suspicious lung nodule for cancer early detection
automatically. Deep Learning is used to carry out this task. In the first stage, image acquisition and de-noising are performed using a filter. Segmentation and feature extraction are carried out in the second stage. Finally, classification is performed with a Deep Learning CNN method dubbed the Enhanced Inception-Residual Convolutional Neural Network. Performance evaluation metrics such as accuracy, precision, recall, and F-measure are examined, and the results are further optimised. The method is compared with ML-E-RF and LR module 1, and it is ultimately determined that the DL approach provides a fast and precise diagnostic system. |
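As a concrete illustration of the kind of building block the Enhanced Inception-Residual CNN name suggests, here is a minimal PyTorch sketch of an inception-style block with a residual shortcut; the channel widths and kernel sizes are assumptions, not the authors' architecture.

import torch
import torch.nn as nn

class InceptionResidualBlock(nn.Module):
    """Parallel 1x1/3x3/5x5 branches, concatenated and added back to the input."""
    def __init__(self, channels: int):
        super().__init__()
        branch = channels // 4
        self.b1 = nn.Conv2d(channels, branch, kernel_size=1)
        self.b3 = nn.Sequential(nn.Conv2d(channels, branch, kernel_size=1),
                                nn.Conv2d(branch, branch, kernel_size=3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(channels, branch, kernel_size=1),
                                nn.Conv2d(branch, branch, kernel_size=5, padding=2))
        self.project = nn.Conv2d(3 * branch, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        mixed = torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)
        return self.act(x + self.project(mixed))   # residual connection

if __name__ == "__main__":
    block = InceptionResidualBlock(64)
    print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])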
|
Keywords: |
CT-scan, CNN, Adaptive Wiener Filter, MV-CNN, EIRCNN |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
INTEGRATING COMPETENCY QUESTION-DRIVEN ONTOLOGY AUTHORING WITH ONTOLOGY
VALIDATION FOR LEARNING ANALYTICS FROM HETEROGENEOUS PLATFORMS |
|
Author: |
SYAHMIE SHABARUDIN, SAZILAH SALAM, IBRAHIM AHMAD, MOHD HAFIZAN MUSA, NURFADHLINA
MOHD SHAREF, DALBIR SINGH, WENDY HALL, ROHANA MAHMUD |
|
Abstract: |
Ontology authoring for learning analytics from heterogeneous platforms is a
complex task, particularly for authors who may lack proficiency in logic. This
paper introduces a novel approach that leverages competency questions (CQs) and
test-driven development principles to streamline ontology authoring and
validation. We analyse common questions from stakeholders at 13 public
universities to create competency questions, identify patterns, and utilise
linguistic presuppositions to define ontology requirements. Our methodology
ensures that these requirements are testable and can be validated, facilitating
an integrated ontology for learning analytics. Additionally, we present a
detailed ontology validation report, demonstrating the effectiveness of our
approach through consistency checks, property validations, and individual test
cases. This integrated method aims to enhance the accuracy and reliability of
ontologies in representing learning analytics data from diverse platforms. |
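To show how a competency question can become an executable test, the sketch below runs a SPARQL ASK query against a tiny ontology with rdflib; the namespace, properties, and data are hypothetical and only illustrate the CQ-to-test pattern, not the authors' ontology.

from rdflib import Graph

ontology_ttl = """
@prefix la: <http://example.org/la#> .
la:alice a la:Learner ; la:completed la:quiz1 .
la:quiz1 a la:Activity ; la:hostedOn la:moodle .
"""

# CQ: "Which learners completed an activity hosted on some platform?"
cq_ask = """
PREFIX la: <http://example.org/la#>
ASK { ?learner a la:Learner ; la:completed ?activity .
      ?activity la:hostedOn ?platform . }
"""

g = Graph()
g.parse(data=ontology_ttl, format="turtle")
result = g.query(cq_ask)
# rdflib exposes the boolean answer of an ASK query on the result object.
assert result.askAnswer, "Competency question not satisfiable by the ontology"
print("CQ satisfied:", result.askAnswer)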
|
Keywords: |
Competency Questions, Ontology Validation, Learning Analytics, Heterogeneous
Platforms, Test-Driven Ontology Development |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
ENHANCING AIOT WITH COMMUNICATION-EFFICIENT FEDERATED LEARNING: A
BLOCKCHAIN-ENABLED APPROACH FOR GREEN AND SECURE IOT SYSTEMS |
|
Author: |
MUNDLAGIRI PRAVEEN KUMAR AND DR. C. DASTAGIRAIAH |
|
Abstract: |
The integration of Artificial Intelligence of Things (AIoT) with Federated
Learning (FL) provides transformative capabilities for distributed intelligent
systems. However, challenges such as excessive communication overhead, energy
inefficiency, and security vulnerabilities limit the scalability and
sustainability of AIoT deployments. This research proposes an innovative
framework combining communication-efficient Federated Learning with
blockchain-supported secure aggregation. The approach integrates gradient
quantization, sketching techniques, and periodic averaging with lightweight
blockchain consensus algorithms. Large-scale simulation experiments on benchmark
datasets demonstrated up to 62% bandwidth savings, 55% reduction in
communication rounds, 40% decrease in energy consumption, and improved model
accuracy compared to existing FL approaches. The framework successfully enables
green, secure, and scalable AIoT systems while conforming to sustainable AI
principles and ensuring resilient collaborative learning in resource-constrained
edge environments. |
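To make the communication-saving idea concrete, here is a small NumPy sketch of uniform 8-bit gradient quantization with dequantization; it is an illustrative scheme under assumed parameters, not the specific quantizer evaluated in the paper.

import numpy as np

def quantize(grad: np.ndarray, bits: int = 8):
    """Uniformly quantize a gradient tensor to `bits` bits per element."""
    lo, hi = float(grad.min()), float(grad.max())
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((grad - lo) / scale).astype(np.uint8 if bits <= 8 else np.uint16)
    return q, lo, scale                     # client uploads q plus two floats

def dequantize(q, lo, scale):
    return q.astype(np.float32) * scale + lo

rng = np.random.default_rng(0)
g = rng.normal(size=10_000).astype(np.float32)
q, lo, scale = quantize(g)
g_hat = dequantize(q, lo, scale)
print("payload: %.0f%% of float32 size, max abs error %.4f"
      % (100 * q.nbytes / g.nbytes, float(np.max(np.abs(g - g_hat)))))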
|
Keywords: |
Artificial Intelligence of Things (AIoT), Federated Learning (FL), Blockchain
Technology, Communication-Efficient Federated Learning, Green IoT. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
IMPROVING THE SECURITY OF THE IOT UTILIZING THE APPLICATION OF ROBUST
CRYPTOGRAPHIC ALGORITHMS |
|
Author: |
DR. ANIMESH SRIVASTAVA, DR P PRABAKARAN, PANJAGARI KAVITHA, DR KARAKA
RAMAKRISHNA REDDY, DR. PARMOD KUMAR, DR.S.SUMA CHRISTAL MARY |
|
Abstract: |
Securing Internet of Things (IoT) communications in real time has become a huge
challenge, as these systems are widespread and vulnerable to cyber and quantum
threats. Conventional cryptographic techniques either impose excessive computational overhead or fail to counter the threats emerging in the post-quantum era. This study offers a hybrid cryptographic framework that combines the post-quantum CRYSTALS-Dilithium scheme for authentication, the Advanced Encryption Standard (AES) for lightweight symmetric encryption, and Elliptic Curve Cryptography (ECC) for efficient key exchange. The solution is designed for IoT devices with limited resources that
operate in real-time settings. To evaluate the model under several attack
vectors, such as brute-force, MITM, and side-channel assaults, a
simulation-based assessment was carried out using Contiki-NG and Cooja, together
with real-world sensor datasets. Results indicate that the framework achieves
over 95% detection accuracy, encryption latency under 1 ms, and energy usage
below 5 mW per node, while maintaining low memory consumption. These findings show that post-quantum secure communication can be deployed at scale in constrained environments. The proposed solution offers an empirically grounded trade-off between performance and cryptographic strength, making it a suitable candidate for next-generation IoT security management that must operate under both traditional and quantum-era risks. |
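The symmetric layer of such a hybrid design can be exercised with the pyca/cryptography package; this sketch shows AES-256-GCM authenticated encryption only, with the key generated locally for illustration (in the described framework it would come from the ECC exchange, and Dilithium signing is out of scope here).

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # stand-in for an ECC-derived key
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # 96-bit nonce, never reuse per key
reading = b'{"sensor": "temp-01", "value": 23.4}'
aad = b"device-id:temp-01"                  # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, reading, aad)
assert aesgcm.decrypt(nonce, ciphertext, aad) == reading
print("ciphertext length:", len(ciphertext), "bytes")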
|
Keywords: |
IoT Security, Cryptography, Lightweight Encryption, Post-Quantum Cryptography,
Hybrid Algorithms |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
ENHANCING MALWARE DETECTION USING HYBRID FEATURE SELECTION TECHNIQUES IN
PREDICTIVE MODELS |
|
Author: |
GAYATHRI DEVI N, V. KRISHNA, NEELIMA GURRAPU, RAJESH BANALA, SHAIK JILANI BASHA,
KOMATI SATHISH |
|
Abstract: |
Malware detection remains a critical challenge in cybersecurity due to the
growing complexity and diversity of malicious threats. Existing literature
largely focuses on either filter-based or wrapper-based feature selection
methods, often limited to specific malware categories or datasets. This study
addresses this gap by introducing a hybrid feature selection approach that
integrates statistical filter metrics with model-specific wrapper refinement,
aiming to optimize feature subsets for enhanced malware detection. Publicly
available datasets, including the Microsoft Malware Classification Challenge
dataset and the Kaggle Malware Dataset, were used to evaluate multiple models
such as Gradient Boosting and Neural Networks. Experimental results show that
the hybrid approach consistently outperforms individual methods in terms of
accuracy, recall, and computational efficiency, achieving up to 95% accuracy and
96% ROC-AUC. The findings contribute new knowledge by presenting a
generalizable, scalable framework that improves malware detection performance
across heterogeneous features and datasets. |
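A minimal scikit-learn sketch of the filter-then-wrapper idea (a statistical filter followed by model-guided recursive elimination); the synthetic data, feature counts, and estimator are placeholders rather than the paper's configuration.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFE, SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=1000, n_features=100, n_informative=15,
                           random_state=0)

pipe = Pipeline([
    # Filter stage: keep the 40 features with the highest mutual information.
    ("filter", SelectKBest(mutual_info_classif, k=40)),
    # Wrapper stage: recursive feature elimination guided by the model itself.
    ("wrapper", RFE(GradientBoostingClassifier(random_state=0),
                    n_features_to_select=15, step=0.2)),
    ("clf", GradientBoostingClassifier(random_state=0)),
])

print("3-fold CV accuracy: %.3f" % cross_val_score(pipe, X, y, cv=3).mean())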
|
Keywords: |
Malware Detection, Hybrid Feature Selection, Machine Learning, Filter And
Wrapper Techniques, Cyber Security |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
DESIGN AND IMPLEMENTATION OF ASSISTIVE TECHNOLOGIES UTILIZING BRAIN-COMPUTER
INTERFACES |
|
Author: |
DR. V. V. R. MAHESWARA RAO, PRIYANKA R. RAVAL, DR. M.V. RAJESH, DR. NIDHI MISHRA, DR GANTA JACOB VICTOR, DR. G. N. R. PRASAD |
|
Abstract: |
Current BCI-assisted technologies tend to be confined to simulation settings or lack the usability and responsiveness required for real-time practical use. This paper fills this gap by introducing a non-invasive, electroencephalogram (EEG)-based brain-computer interface (BCI) system that allows individuals with motor deficits to operate a wheelchair and smart home appliances using motor imagery signals. The system employs the Emotiv EPOC+ (14-channel) EEG acquisition headset. Preprocessing consists of bandpass filtering (8-30 Hz), 50 Hz notch filtering, and artefact correction using Independent Component Analysis (ICA). Power Spectral Density (PSD) and Common Spatial Pattern (CSP) features are extracted and classified with a Convolutional Neural Network (CNN) implemented in PyTorch. The classified mental commands are sent over the UART protocol to an Arduino microcontroller that actuates the target devices. The system achieved high classification accuracy and low latency, demonstrating its effectiveness in real-time operation. Usability tests with
participants yielded highly positive feedback on comfort, ease of use, and
minimal training required. The novelty of this work lies in the fact that the
EEG classification based on the deep neural network is incorporated into the
functional assistive hardware integrated within the natural environment and can
be deployed. The study adds a validated, real-time, end-to-end BCI system that
links intent recognition through EEG to physical device use with a higher level
of usability, performance, and practical feasibility than current
state-of-the-art solutions. |
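The 8-30 Hz motor-imagery band-pass and 50 Hz notch steps can be reproduced with SciPy; a short sketch on synthetic data, assuming a 128 Hz sampling rate rather than the study's actual recording settings.

import numpy as np
from scipy.signal import butter, filtfilt, iirnotch, sosfiltfilt

fs = 128.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
eeg = (np.sin(2 * np.pi * 10 * t)            # 10 Hz mu-band-like component
       + 0.5 * np.sin(2 * np.pi * 50 * t)    # 50 Hz mains interference
       + 0.1 * np.random.randn(t.size))      # broadband noise

# 4th-order Butterworth band-pass, 8-30 Hz, applied zero-phase.
sos = butter(4, [8, 30], btype="bandpass", fs=fs, output="sos")
band = sosfiltfilt(sos, eeg)

# 50 Hz notch for residual mains interference.
b, a = iirnotch(w0=50, Q=30, fs=fs)
clean = filtfilt(b, a, band)
print("band-passed RMS: %.3f" % float(np.sqrt(np.mean(clean ** 2))))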
|
Keywords: |
Assistive Technology, EEG, Motor Imagery, Neural Signal Processing,
Brain-Computer Interface |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
INTELLIGENT LAND SUITABILITY ANALYSIS UTILIZING MULTILAYER PERCEPTRON AND IOT
SENSORS |
|
Author: |
PRIYANKA R. RAVAL, DR. R. SHALINI, NARESH E, MOAZZAM HAIDARI, DR. HARIKRUSHNA GANTAYAT, HYMAVATHI THOTTATHYL |
|
Abstract: |
In contemporary agriculture, intelligent computational models are becoming
increasingly popular in analysing land suitability to achieve greater precision
and scale. The paper is a comparison of hybrid and advanced Multilayer
Perceptron (MLP) architectures with rich Internet of Things (IoT) sensor data on
a large, real-world data set (10,000+ samples, 50. The purpose is to rigorously
test the robustness, accuracy, and computational speed of both MLP-based IoT
systems in a practical agricultural environment. In high-dimensional,
information-rich settings, the advanced MLP coupled with advanced IoT sensors
(which comprises drone-acquired Normalised Difference Vegetation Index (NDVI))
achieves 92.4% accuracy, a 0.91 F1 score, and a 0.88 Matthews Correlation
Coefficient (MCC), surpassing the hybrid model. The Hybrid MLP + Hybrid IoT
Sensor, on the other hand, has a robustness score of 0.9 and operates well in
handling noisy circumstances, real-time inference, and quick deployment. Both
models facilitate practical and context-sensitive benchmarking so that the
appropriate system can be chosen by the stakeholders. The study contributes to
methodological practices in land suitability assessment and provides
recommendations to scale to additional crops, areas, and sensors to promote
data-informed and robust agricultural decision support. |
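For orientation, a compact scikit-learn sketch of an MLP-based land-suitability classifier on synthetic sensor features; the feature set (NDVI, soil pH, moisture, temperature, salinity) and the toy labelling rule are assumptions standing in for the IoT data described above.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(0.1, 0.9, n),    # NDVI
    rng.uniform(4.5, 8.5, n),    # soil pH
    rng.uniform(5, 45, n),       # moisture (%)
    rng.uniform(10, 40, n),      # temperature (deg C)
    rng.uniform(0, 4, n),        # salinity (dS/m)
])
# Toy suitability rule, only so the classifier has something learnable.
y = ((X[:, 0] > 0.4) & (X[:, 1] > 5.5) & (X[:, 1] < 7.5) & (X[:, 4] < 2)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(64, 32),
                                    max_iter=500, random_state=0))
model.fit(X_tr, y_tr)
print("held-out accuracy: %.3f" % model.score(X_te, y_te))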
|
Keywords: |
Land suitability, Multilayer Perceptron, IoT sensors, Agriculture, Normalised
Difference Vegetation Index (NDVI) |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
TELUGU NLP CHALLENGES AND METHODS: A SURVEY OF FILTERING, STEMMING, AND
TRANSFORMER-BASED HATE SPEECH DETECTION |
|
Author: |
SANDEEP KUMAR MUDE , K YOGESWARA RAO |
|
Abstract: |
Telugu is a major Dravidian language with complex grammar, deep morphological
structures, and flexible syntax. These linguistic features, while rich and
expressive, pose significant challenges for natural language processing. This
paper surveys the current landscape of Telugu NLP and presents a hybrid approach
that integrates rule-based grammar modeling with machine learning techniques.
The focus is on building efficient systems for text categorization, clause
segmentation, grammar verification, and hate speech detection. One of the main
challenges addressed is clause segmentation in compound and complex Telugu
sentences. Due to implicit subjects and overlapping structures, traditional
parsing methods struggle. The proposed solution uses syntactic pattern
recognition and partial parsing based on subject-predicate matching, allowing
for efficient segmentation without full syntactic trees. The system handles
clauses with shared subjects and ensures syntactic agreement across dependent
and independent components using a POS-based verification model. In addition to
grammar checking, this study emphasizes the role of stemming in processing
inflection-heavy languages. Telugu words often contain layers of suffixes that
must be stripped to find the root. The paper compares linguistic rule-based
methods with data-driven approaches and finds that hybrid models—combining affix
rules with statistical frequency patterns—yield superior performance for
classification and retrieval tasks. Hate speech detection and text
classification are explored using multilingual transformer architectures. These
include mBERT, XLM-Roberta, IndicBERT, and MuRIL. The models are fine-tuned
using Telugu-specific datasets and adapted with regional tokenization
strategies. The paper highlights how models trained on regional corpora offer
more contextual understanding and semantic precision for detecting implicit or
culturally nuanced hate speech. This survey consolidates computational models,
linguistic frameworks, and evaluation benchmarks to demonstrate how rule-based
strategies can effectively complement machine learning. It encourages more
cross-lingual NLP research, especially for Indian languages lacking extensive
digital resources. By presenting a linguistically grounded yet scalable system,
the paper contributes both theoretical and practical tools for Telugu NLP
research and applications. |
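To illustrate the hybrid stemming idea (affix rules gated by corpus frequency), here is a language-agnostic Python sketch; the suffix list, example words, and frequency table are placeholders and do not represent real Telugu morphology.

from collections import Counter

# Placeholder rules and lexicon; a real system would use Telugu suffix tables
# and a large frequency list of attested stems.
SUFFIXES = sorted(["lu", "ki", "ni", "lo", "tho"], key=len, reverse=True)
STEM_FREQ = Counter({"pusthaka": 120, "illu": 95, "badi": 60})

def hybrid_stem(word: str, min_count: int = 5) -> str:
    """Strip the longest matching suffix, but accept the candidate stem only
    if it is frequent enough in the corpus (the statistical gate)."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            stem = word[: -len(suffix)]
            if STEM_FREQ[stem] >= min_count:
                return stem
    return word                               # fall back to the surface form

for w in ["pusthakalu", "illuki", "badilo", "unknownword"]:
    print(w, "->", hybrid_stem(w))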
|
Keywords: |
Telugu NLP, Filtering, Stemming, Transformer, Hate Speech |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
EVALUATING THE ROLE OF BODY BIASING IN ENHANCING DOMINO LOGIC CIRCUITS FOR
ADVANCED VLSI TECHNOLOGIES |
|
Author: |
Dr. SRINIVAS AMBALA, K. PURUSHOTHAM, VAKITI SREELATHA REDDY, J. MANGA, RAVINDRANADH JAMMALAMADUGU, KOMATIGUNTA NAGARAJU |
|
Abstract: |
Traditional complementary metal-oxide semiconductor (CMOS) technologies face
scaling limitations, which have driven the search for innovative circuit-level
optimisation methods to meet the rising demand for energy-efficient and
high-performance microprocessors. Body biasing in domino logic is one such
technique that holds promise for adjusting the threshold voltage (Vth) and improving the delay-power trade-off under variable operating conditions. High-speed
microprocessor datapath components utilise domino logic circuits, and this
research examines how forward and reverse body biasing approaches influence
them. The main aim is to investigate how body biasing optimises performance
metrics such as energy efficiency, leakage power, dynamic power, and delay,
while maintaining circuit reliability and noise immunity. Various body-biased
domino logic configurations were modelled and compared under different supply
voltages (VDD), operating frequencies, and process variations, using both HSPICE
simulations and MATLAB/Python-based data visualisation. The experimental setup
included logic units like adders, multipliers, and decoders. The comparison
shows that RBB effectively reduces leakage but slows performance, whereas FBB
significantly decreases delay at the cost of increased leakage power. A balanced
trade-off between delay and power consumption exists within a tunable voltage
window, which is scalable. Dynamic power consumption drops sharply at lower VDD
levels; however, latency increases, which FBB can mitigate. This dual modulation
enables adaptive energy control, making it suitable for modern microprocessors
where workload scaling is crucial. The results demonstrate that domino logic
design can effectively balance performance and efficiency in future technology
nodes with the integration of body bias control schemes, making it suitable for
low-power and high-speed VLSI applications. |
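A first-order, textbook-style Python sketch of the trade-off described above: body bias shifts the effective threshold voltage, which lowers delay (alpha-power law) while raising subthreshold leakage (exponential in Vth). All coefficients are illustrative assumptions, not values fitted to the paper's HSPICE results.

import numpy as np

VT_THERMAL = 0.026     # thermal voltage at room temperature (V)
ALPHA = 1.3            # velocity-saturation index in the alpha-power law
N_SUB = 1.5            # subthreshold slope factor
K_BODY = 0.15          # assumed body-effect sensitivity (V of Vth per V of bias)

def vth_eff(vth0, v_bias):
    """Linearised body effect: forward bias (v_bias > 0) lowers Vth."""
    return vth0 - K_BODY * v_bias

def delay(vdd, vth):
    """Alpha-power-law gate delay, arbitrary units."""
    return vdd / (vdd - vth) ** ALPHA

def leakage(vth):
    """Subthreshold leakage, arbitrary units, exponential in -Vth."""
    return np.exp(-vth / (N_SUB * VT_THERMAL))

vdd, vth0 = 0.9, 0.35
for label, bias in [("RBB -0.3 V", -0.3), ("zero bias ", 0.0), ("FBB +0.3 V", 0.3)]:
    v = vth_eff(vth0, bias)
    print(f"{label}: Vth={v:.3f} V  delay={delay(vdd, v):.2f}  leakage={leakage(v):.2e}")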
|
Keywords: |
Domino Logic Circuits, Microprocessor, Body Biasing, Dynamic Power Consumption,
VLSI Application |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
EMPLOYING QUERY PERFORMANCE IN NOSQL DATABASES FOR APPLICATIONS UTILIZING BIG
DATA |
|
Author: |
DR. RATNA RAJU MUKIRI, ASESH KUMAR TRIPATHY, DR.M.V.RAJESH, B RAMANA REDDY,
ELANGOVAN MUNIYANDY, DR.S.SUMA CHRISTAL MARY |
|
Abstract: |
Traditional relational databases face challenges with scalability, schema
flexibility, and real-time performance in the era of large data output from
various sources, including social media, IoT sensors, and user-generated
content. To get over this restriction, more and more people are turning to NoSQL
databases, which are ideal for big data because of their distributed
architectures and varied data models. The goal of this research is to examine
and improve query performance in four prominent NoSQL systems: document-based
MongoDB, column-family Cassandra, key-value Redis, and graph-based Neo4j under
different real-world workloads. Using real-world (Twitter, Stack Overflow, MovieLens) and artificially generated graph datasets, a Kubernetes-orchestrated testbed was set up to measure execution time, throughput, latency, scalability, and resource utilization. With tools such as YCSB and Apache JMeter, the methodology included controlled trials with read, write, update, aggregation, and complex traversal queries. The results show that Redis consistently delivers the lowest latency and the highest throughput because it stores data in memory, making it well suited to real-time analytics. Cassandra scales efficiently for write-heavy workloads. MongoDB, on the other hand, handles a wide range of query types well thanks to its good
indexing. Neo4j is better at graph traversal jobs, but it has greater latency
when there is a lot of traffic at the same time. Mathematical models back up the
trade-off between execution time and throughput even further by revealing a
significant link between dataset size and query complexity. This research adds a
complete benchmarking methodology for comparing NoSQL systems that helps
developers choose and tweak databases depending on how they will be used. In the
context of processing massive amounts of data, it emphasises how important it is
to make sure that database structures meet the needs of certain applications. |
|
Keywords: |
NoSQL, Query Performance, Big Data, Database Optimization, MongoDB, Cassandra. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
CAPACITOR AND SUPER CAPACITOR RELIABILITY ANALYSIS UTILISING NEO-FUZZY NEURAL
LEARNING |
|
Author: |
DESIDI NARSIMHA REDDY, RAHUL SURYODAI, DR B.V.S. ACHARYULU, MYLAVARAPU KALYAN RAM, PUPPALA RAMYA, B SWARNA |
|
Abstract: |
As energy storage components are employed more and more in mission-critical
applications such as smart grids, electric vehicles and aerospace systems,
supercapacitors' reliability has emerged as the main issue. Both traditional
physics-based models and deep learning (DL) techniques frequently fail in two
areas: adaptability and real-time usefulness. This is particularly true when
there are constraints on data and computational power. The Neo-Fuzzy Neural
Learning (NFNL) method is used in this study to analyse reliability and forecast supercapacitor degradation, combining the interpretability of fuzzy logic with the adaptive learning capability of neural networks. Using time-series data, the study classifies supercapacitor health states across lifecycle stages and accurately predicts Equivalent Series Resistance (ESR) degradation. The proposed NFNL model was trained and validated on a real dataset of ESR values collected over 1,500 hours of accelerated ageing. For comparison, the research employed both traditional Weibull-based statistical models and state-of-the-art baselines such as Long Short-Term Memory (LSTM) and Support Vector Regression (SVR). The NFNL model
achieves a classification accuracy of 91.2% and a Mean Absolute Error (MAE) of
0.111, outperforming all baselines. Error distribution study confirms the
accuracy and stability of NFNL's ESR degradation curve, which closely matches
the actual data. The NFNL framework finds the ideal balance of precision,
readability and computing economy, making it perfect for condition tracking
systems that operate in real-time. A trustworthy and interpretable prediction
approach for supercapacitor health estimation is presented in this research,
which has far-reaching implications for predictive maintenance as well as
lifespan optimisation on embedded and resource-constrained systems. |
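As a generic sketch of the model family (not the authors' NFNL implementation), the code below builds a single-input neo-fuzzy neuron: triangular membership functions with trainable weights, fitted by a per-sample LMS update to a toy ESR-versus-ageing-time curve.

import numpy as np

class NeoFuzzyNeuron:
    def __init__(self, n_sets=7, lo=0.0, hi=1.0):
        self.centers = np.linspace(lo, hi, n_sets)
        self.width = self.centers[1] - self.centers[0]
        self.w = np.zeros(n_sets)

    def memberships(self, x):
        # Overlapping triangular membership functions over [lo, hi].
        return np.clip(1.0 - np.abs(x - self.centers) / self.width, 0.0, 1.0)

    def predict(self, x):
        return float(self.memberships(x) @ self.w)

    def fit(self, xs, ys, lr=0.1, epochs=200):
        for _ in range(epochs):
            for x, target in zip(xs, ys):
                mu = self.memberships(x)
                self.w -= lr * (mu @ self.w - target) * mu   # LMS update

# Toy degradation curve: ESR drifts upward with normalised ageing time.
t_norm = np.linspace(0, 1, 200)
esr = 0.05 + 0.04 * t_norm ** 2 + 0.001 * np.random.default_rng(0).normal(size=200)

model = NeoFuzzyNeuron()
model.fit(t_norm, esr)
print("predicted ESR at 80%% of the ageing horizon: %.4f ohm" % model.predict(0.8))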
|
Keywords: |
Energy Storage Components, Supercapacitors, Equivalent Series Resistance, Fuzzy
Logic, Neural Networks. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
ENHANCING CHATBOT RESPONSES THROUGH CONTEXT-AWARE NATURAL LANGUAGE
UNDERSTANDING |
|
Author: |
DR. GUNDA SWATHI, ASHISH GUPTA, JATIN ARORA, LAVANYA KONGALA, DR. SHIRISHA DESHPANDE, ELANGOVAN MUNIYANDY |
|
Abstract: |
The development of intelligent conversational agents has brought significant
improvements in user-machine interaction; however, most existing chatbots still
struggle with maintaining contextual coherence and delivering relevant responses
in multi-turn dialogues. This research proposes a context-aware Natural Language Understanding (NLU) framework that improves chatbots' processing of user intents and recognition of entities by increasing semantic awareness. The system maintains historical context and
guarantees continuity across conversation turns by merging pre-trained
transformer-based models like BERT and Sentence-BERT with a bespoke dialogue
context encoder. The architecture uses a combination of human-centric
assessments (appropriateness, contradiction rate) and automated metrics (BLEU-4,
ROUGE-L, contextual coherence score) to evaluate performance thoroughly.
Experimental results on benchmark datasets show that the proposed model improves markedly over conventional single-turn NLU systems and the baseline in accuracy, coherence, and reduced hallucination. It obtained a 0.88 F1-score in entity recognition, a 0.85 SBERT-based semantic similarity score, and an intent detection accuracy of 92.1%, along with a 25% gain in human-rated contextual appropriateness over the baseline. A thorough error analysis provides further evidence of fewer contradictions, better entity tracking, and closer alignment with user expectations. The results show that the method works well
for practical, ever-changing uses like AI assistants and customer service. To
further human-centric conversational AI, this work shows how dialogue memory
techniques and contextual embeddings might make conversations more meaningful
and logical. Based on these findings, future work will focus on multilingual
adaptability, low-resource optimisation, and deployment in edge environments. |
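A minimal sketch of context-aware response ranking with Sentence-BERT embeddings via the sentence-transformers package; the model name and the toy dialogue are assumptions, and the paper's bespoke dialogue context encoder is not reproduced here.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed public SBERT model

history = ["I ordered a laptop last week.",
           "It arrived yesterday but the screen is cracked."]
user_turn = "What can I do about it?"
candidates = [
    "You can request a replacement or a refund for the damaged item.",
    "Our laptops come in several colours and screen sizes.",
    "Thanks for your feedback about delivery speed.",
]

# Encode the dialogue history together with the current turn as the context.
context = " ".join(history + [user_turn])
ctx_emb = model.encode(context, convert_to_tensor=True)
cand_emb = model.encode(candidates, convert_to_tensor=True)

scores = util.cos_sim(ctx_emb, cand_emb)[0]       # cosine similarity per candidate
best = int(scores.argmax())
print("best response (score %.2f): %s" % (float(scores[best]), candidates[best]))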
|
Keywords: |
Chatbot Response, Natural Language Understanding, Transformer Model, Semantic
Score, Optimisation |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
ENHANCING COMMUNICATION PROTOCOL DESIGN FOR ENERGY CONSERVATION IN IOT NETWORKS |
|
Author: |
PANYA MANOJ SHARMA, DESIDI NARSIMHA REDDY, KALYANAPU SRINIVAS, MUKESH MADANAN, ELANGOVAN MUNIYANDY, A. SMITHA KRANTHI |
|
Abstract: |
The explosion in Internet of Things (IoT) installations has connected millions
of low-power devices, which poses serious problems for maintaining reliable,
energy-efficient connectivity in diverse settings. Traditional hybrid
optimisation approaches like Particle Swarm Optimisation (PSO)-Low Energy
Adaptive Clustering Hierarchy (LEACH) integrated with Random Forest (RF) improve
clustering and traffic forecasting. Still, they struggle with the dynamic,
heterogeneous traffic of smart cities, industrial IoT, and environmental
monitoring. Emerging Deep Reinforcement Learning (DRL) algorithms, particularly
Proximal Policy Optimisation (PPO), enable real-time routing and Medium Access
Control (MAC) regulation but have not been extensively compared with hybrid
algorithms in unified, realistic scenarios. This paper presents a thorough,
application-oriented comparative analysis of a DRL protocol based on PPO and a
PSO-LEACH + RF hybrid system, both of which are designed for energy-aware IoT
communication. Both methods were evaluated using the same simulation scenario
with consolidated traffic data reflecting the periodic, event-driven, and bursty
nature of environmental monitoring, smart home automation, and industrial IoT
applications. Key metrics include energy consumption, throughput, average packet
delay, and packet delivery ratio (PDR). Results show that the hybrid model used
0.2–1.5 J less energy, while the PPO-DRL model achieved up to 23 kbps higher
throughput, 8–10 ms lower latency, and up to 2% higher PDR. |
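For reference, the classic LEACH cluster-head election rule that the hybrid baseline builds on fits in a few lines; this is the standard textbook threshold T(n) = p / (1 - p*(r mod 1/p)), not the paper's PSO-tuned variant.

import random

def leach_threshold(p, round_no):
    """Threshold for nodes not chosen as cluster head in the last 1/p rounds."""
    epoch = round(1.0 / p)
    return p / (1.0 - p * (round_no % epoch))

def elect_cluster_heads(nodes, p, round_no, last_ch_round):
    epoch = round(1.0 / p)
    heads = []
    for n in nodes:
        eligible = (round_no - last_ch_round.get(n, -epoch)) >= epoch
        if eligible and random.random() < leach_threshold(p, round_no):
            heads.append(n)
            last_ch_round[n] = round_no
    return heads

random.seed(1)
history = {}
for r in range(5):
    chs = elect_cluster_heads(range(100), p=0.05, round_no=r, last_ch_round=history)
    print(f"round {r}: {len(chs)} cluster heads, e.g. {chs[:5]}")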
|
Keywords: |
Internet of Things (IoT), Deep Reinforcement Learning (DRL), PSO LEACH, Energy
Efficiency, Quality of Service (QoS) |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
NUMERICAL METHODS FOR SOLVING NONLINEAR DIFFERENTIAL EQUATIONS IN INFORMATION
NETWORK SECURITY PROBLEMS |
|
Author: |
MARYNA BELOVA, VOLODYMYR DENYSENKO, SVITLANA KARTASHOVA, VALERIJ KOTLYAR,
STANISLAV MIKHAILENKO |
|
Abstract: |
The increasing complexity and dynamism of modern information networks make the problem of their resistance to threats increasingly important. The use of
differential equations, in particular, variants of epidemiological models, is
one of the promising approaches to simulating the spread of harmful effects in
such networks. This study proposes the use of numerical methods for solving
nonlinear differential equations for modelling the dynamics of infection under
different scenarios of cybersecurity threats. A scalable information network
with a dynamic topology based on a stochastic block model is the basis of the
experimental environment. The aim of the research is to determine the most
effective numerical methods for modelling the spread of threats in information
networks, taking into account accuracy, speed, and resistance to changes in
parameters. Generalized models of the MeanField type were used to describe the
spread of influence — both the basic one and its four nonlinear variations with
exponential, logarithmic, quadratic, and power dependence, respectively. The
models were solved using a wide range of numerical methods: classical adaptive
methods (RK45, RK23, Radau, BDF, LSODA), as well as self-implemented schemes
(Adams-Bashforth, Adams-Moulton). Large-scale experiments were conducted with
varying network parameters (size, intensity of connections), initial conditions,
model parameters, and integration step. The analysis was carried out using such
metrics as accuracy (RMSE, Max Error), efficiency (execution time), and
sensitivity to parameters. The obtained results gave grounds to determine the
advantages of specific methods for different types of models and levels of
system complexity. The prospects for further research include expanding models
to multi-level networks, including stochastic components, and developing
intelligent systems for choosing a numerical method in real time. |
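A minimal SciPy sketch in the spirit of the experiments: one explicit and two implicit solvers applied to a mean-field, SIS-style spreading model di/dt = beta*i*(1 - i) - gamma*i (the network coupling and the four nonlinear variants studied in the paper are omitted, and the rates are assumed values).

import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.9, 0.3            # assumed infection and recovery rates

def mean_field(t, y):
    i = y[0]
    return [beta * i * (1.0 - i) - gamma * i]

t_span, y0 = (0.0, 40.0), [0.01]
t_eval = np.linspace(*t_span, 200)

for method in ("RK45", "Radau", "BDF"):
    sol = solve_ivp(mean_field, t_span, y0, method=method,
                    t_eval=t_eval, rtol=1e-8, atol=1e-10)
    print(f"{method:>5}: rhs evaluations={sol.nfev:5d}  i(40)={sol.y[0, -1]:.6f}")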
|
Keywords: |
Nonlinear Differential Equations, Numerical Methods, Epidemiological Modelling,
Information Security, Dynamic Network. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
ADAPTIVE CROWD FEEDBACK STRATEGIES FOR IMPROVED WIRELESS NETWORK EFFICIENCY |
|
Author: |
KROVVIDI S B AMBIKA, M.SELVI, PALLAVI SACHIN PATIL, V. RAMA KRISHNA, N.C.
KOTAIAH, AMIT VERMA |
|
Abstract: |
Wireless networks today face increasing performance challenges due to dynamic
conditions like user mobility, fluctuating density and diverse application
demands. Conventional approaches, such as static resource allocation and offline
machine learning (ML), lack the adaptability to respond effectively to real-time
variations. To address these limitations, this research presents an Adaptive
Crowd Feedback Strategy that combines live, trust-filtered user feedback with a closed-loop optimisation system. The framework comprises four core modules: feedback collection, trust filtering, a Bayesian reinforcement learning (RL) engine, and network control reconfiguration. Mathematical models combine Quality of Experience (QoE) and Quality of Service (QoS) metrics, Bayesian inference drives policy updates, and queueing theory predicts network performance. Real-world and synthetic datasets, including more than 18,000 mobile session logs, were used in the simulations. Compared with static and offline ML-based systems, the results show substantial gains: up to 35% higher throughput, 30% lower latency, and more than a 20% improvement in energy efficiency. The adaptive
system also achieves quicker convergence, making it highly responsive to
changing network conditions. Comparative evaluation highlights the system’s
ability to maintain a higher packet delivery ratio and minimal congestion by
smarter, feedback-driven decisions. Practical issues such as computational
trade-offs, feedback dependability and scalability are highlighted in the
discussion of the results. Noting promising applications in forthcoming 5G/6G, smart-city, and edge-computing infrastructures, the research finds that the proposed adaptive model improves real-time network performance while laying the groundwork for smart, user-aware, and energy-efficient wireless communication. |
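The Bayesian decision loop can be illustrated with Beta-Bernoulli Thompson sampling over a few candidate network configurations driven by binary user feedback; the configuration names and satisfaction rates are assumed, and the paper's full QoE/QoS and queueing models are not reproduced.

import numpy as np

rng = np.random.default_rng(42)
configs = ["low-power", "balanced", "high-throughput"]
true_satisfaction = [0.55, 0.70, 0.85]       # hidden per-config feedback rates
alpha = np.ones(len(configs))                # Beta posterior parameters
beta = np.ones(len(configs))
picks = np.zeros(len(configs), dtype=int)

for t in range(3000):
    samples = rng.beta(alpha, beta)          # Thompson sample per configuration
    a = int(samples.argmax())                # apply the most promising config
    reward = rng.random() < true_satisfaction[a]   # trust-filtered user feedback
    alpha[a] += reward
    beta[a] += 1 - reward
    picks[a] += 1

for name, n, a_, b_ in zip(configs, picks, alpha, beta):
    print(f"{name:>15}: chosen {int(n):4d} times, posterior mean {a_ / (a_ + b_):.2f}")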
|
Keywords: |
Wireless Network, Machine Learning, Bayesian Inference, Packet Delivery Ratio,
Congestion, Energy Efficiency. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
ADAPTIVE DEEP LEARNING FRAMEWORK USING BAYESIAN OPTIMIZATION FOR AUTISM SPECTRUM
DISORDER PREDICTION FROM SCREENING DATA |
|
Author: |
B. DEEPA , Dr.K.S. JEEN MARSELINE |
|
Abstract: |
Autism Spectrum Disorder (ASD) is a complex neurodevelopmental condition marked
by persistent challenges in communication, behaviour regulation, and social
interaction. The heterogeneity of symptoms across individuals and age groups
complicates early detection, as behavioural traits often overlap with other
conditions or remain masked until later developmental stages. Traditional
diagnostic methods are usually time-intensive, subjective, and rely on
specialist interpretation, leading to delayed or inconsistent identification.
Screening data offers a scalable, cost-effective, and non-invasive alternative
for early ASD prediction, capturing observable traits through structured
behavioural questionnaires. To overcome diagnostic inconsistencies and optimize
model performance, this research proposes a Bayesian Optimization based Long
Short-Term Memory (BO-LSTM) framework that adaptively learns temporal
dependencies in screening responses while automatically tuning its parameters
using a probabilistic surrogate model. The model was evaluated using the Autism
Screening Dataset, comprising 6075 records and 20 structured attributes, sourced
from a mobile-based application developed by Dr. Fadi Fayez. The dataset
includes behavioural inputs from toddlers, children, adolescents, and adults,
with questionnaires tailored to each age group. BO-LSTM achieved a
classification accuracy of 74.375%, along with notable gains in sensitivity,
specificity, and interpretability. These results demonstrate the framework's
effectiveness in processing sequential screening data for timely and reliable
ASD prediction across diverse age groups. |
|
Keywords: |
Autism Spectrum Disorder, Prediction, Screening Data, Deep Learning, Long
Short-Term Memory, Bayesian Optimization |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
LIC-NET: LIGHTWEIGHT INTEGRATED CONVOLUTIONAL NETWORK FOR ACCURATE POLYP
SEGMENTATION |
|
Author: |
SUCHITRA A. PATIL, CHANDRAKANT GAIKWAD |
|
Abstract: |
The segmentation of a small polyp present in intestinal regions, which tend to
be malignant, is a basic and essential task for the detection of colon cancer.
Segmenting a small polyp is challenging due to the higher similarity of tissues.
Inaccurate segmentation results in higher false positives for colorectal cancer
classification. This work suggests a two-stage deep learning network for polyp
segmentation. In the first stage, the colonoscopy image is preprocessed to
generate salient regions. In the second stage, the salient regions are processed
by a novel U-Net structure network called LIC-Net, which integrates transfer
learning and multiscale feature extraction to increase the segmentation
accuracy. Testing with Kvasir-Seg and CVC-ClinicDB, both in direct and
cross-learning mode, the proposed solution achieved more than 90% accuracy. The
false positives are at least 2% lower compared to the most recent deep learning
based segmentation works. |
|
Keywords: |
Convolutional Neural Network, Fuzzy-TAN, Colonoscopy, Dilated Convolution, Polyp
Segmentation, U-Shaped Model |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
A REVIEW OF THE ECONOMIC SOCIOLOGY OF DIGITAL MARKETS: THE DYNAMICS OF SOCIAL
INTERACTION IN E-COMMERCE PLATFORMS |
|
Author: |
M. RASYID RIDHA, ILHAM SAMUDRA SANUR, REZKY JUNIARSIH NUR, AHMADIN, M. YUNASRI
RIDHOH |
|
Abstract: |
This research seeks to define, delineate, and examine the evolution of trends in
the Economic Sociology of Digital Markets, emphasizing the dynamics of social
interaction inside e-commerce platforms. Social connection is demonstrated to be
a vital factor that affects customer purchasing intents and decisions, both
directly and indirectly. Social aspects, including trust, community
participation, feedback, online reviews, and emotional experiences, are
increasingly intricately intertwined within social commerce and live-streaming
functionalities. Elements such as reliable followers, electronic word-of-mouth
(eWOM), and the streamer's social presence significantly influence users'
impulse purchasing behavior and emotional involvement. Conversely, social
interactions that are not appropriately tailored to consumers' psychological
situations, such as loneliness, might diminish loyalty and purchasing
inclinations. This study employs a systematic literature review methodology
utilizing a bibliometric approach to examine the trajectory of research
progress. On April 17, 2025, 29 items were retrieved from the Scopus database
using the keywords “social interaction” and “e-commerce platforms.” All data
were examined utilizing the Analyze Search Results function on Scopus.com and
visualized with VOSviewer software version 1.6.20. Twelve pivotal studies from
2019 to 2025 were acquired through the selection process, directly reflecting
the dynamics of social interaction within the realm of e-commerce. Bibliometric
analysis shows a sharp increase in the number of publications, peaking in 2024,
driven by global collaboration, increased funding, and better access to digital
data. By viewing the digital market as an integrated social environment, this
research contributes to information technology (IT) by explaining how the
dynamics of social interaction on e-commerce platforms critically shape consumer
behavior and support a sustainable digital ecosystem. |
|
Keywords: |
Economic Sociology, Social Interaction, E-Commerce Platforms, Systematic
Literature Review |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
TEST CASE COVERAGE ANALYSIS USING CLUSTERING METHODS FOR TEST CASE MINIMIZATION
IN SOFTWARE TESTING |
|
Author: |
SANJAY SHARMA, JITENDRA CHOUDHARY, GOVINDA PATIL |
|
Abstract: |
In test case minimization, particularly in software testing, large test suites are reduced in order to remove unnecessary and duplicate test cases while the coverage of the minimized suite remains the same as that of the original. Many techniques are available for such minimization, and clustering is one of the principal ones. The objective of this paper is to examine different clustering methods, test case coverage, and their role in software testing, along with the reasons for choosing particular methods. As a starting point, all clustering methods require datasets, which are generated either by programmers or by automation tools; duplicate test cases are a common by-product of automation tools. This paper therefore analyses test case coverage and test case minimization with particular emphasis on clustering methods. |
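One simple way to realise clustering-based minimization is sketched below: cluster binary coverage vectors with k-means and keep one representative test per cluster. The coverage matrix is synthetic and the choice of k is an assumption.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_tests, n_branches = 60, 25
# Binary coverage matrix: row = test case, column = covered branch (toy data).
coverage = (rng.random((n_tests, n_branches)) < 0.3).astype(int)

k = 10                                     # target size of the minimized suite
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(coverage)

# Keep one representative per cluster: the test closest to its centroid.
selected = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(coverage[members] - km.cluster_centers_[c], axis=1)
    selected.append(int(members[dists.argmin()]))

orig_cov = (coverage.sum(axis=0) > 0).mean()
mini_cov = (coverage[selected].sum(axis=0) > 0).mean()
print(f"suite reduced {n_tests} -> {len(selected)} tests; "
      f"branch coverage {orig_cov:.0%} -> {mini_cov:.0%}")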
|
Keywords: |
Test Case Minimization, Coverage Analysis, Data Mining, Clustering, Test Case
Coverage |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
ENHANCING GESTATIONAL DIABETES MELLITUS PREDICTION USING WHITE TIGER SWARM
OPTIMIZATION-ENHANCED MULTILAYER PERCEPTRON (WTSO-MLP) |
|
Author: |
D. SHOBANA, V. VINODHINI |
|
Abstract: |
Gestational Diabetes Mellitus (GDM) is a critical pregnancy-related complication
that affects both maternal and neonatal health. Although commonly used for
diagnosis, the Oral Glucose Tolerance Test (OGTT) is invasive, time-consuming,
and fails to provide early detection. Inconsistent screening guidelines further
complicate the identification of high-risk pregnancies, emphasizing the need for
more accurate and timely predictive tools. This research develops a robust,
non-invasive prediction model for GDM risk using White Tiger Swarm
Optimization-enhanced Multilayer Perceptron (WTSO-MLP). The goal is to enhance
early detection by integrating bio-inspired optimisation techniques to improve
model performance while reducing dependency on invasive tests, such as OGTT. The
WTSO-MLP model combines White Tiger Swarm Optimisation (WTSO) with Multilayer
Perceptron (MLP) to optimise weight configurations, trained on a dataset of
3,525 instances that contain clinical and demographic data. Class imbalance has
been addressed through adaptive techniques. Model performance has been evaluated
using the Matthews Correlation Coefficient (MCC), Error Rate, Youden’s Index,
and Critical Success Index (CSI). This study contributes new knowledge by
demonstrating how a bio-inspired optimization strategy can simultaneously refine
neural network parameters and feature subsets using prospectively collected
data, achieving superior accuracy, early detection capability, and adaptability
across diverse clinical settings. The WTSO-MLP model outperformed traditional
methods, achieving high performance in GDM prediction, especially for
early-stage detection. The model demonstrated improved generalization, reduced
misclassifications, and higher MCC scores, making it a reliable tool for
clinicians. The WTSO-MLP model provides an innovative, efficient solution for
early GDM risk prediction, improving diagnostic accuracy, generalization, and
interpretability. It can seamlessly integrate into clinical workflows to enable
early, non-invasive GDM assessments, ultimately enhancing maternal and fetal
health outcomes. |
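The evaluation metrics named above follow directly from the confusion matrix, as in this short scikit-learn sketch on toy labels (not the study's data).

import numpy as np
from sklearn.metrics import confusion_matrix, matthews_corrcoef

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
error_rate = (fp + fn) / (tp + tn + fp + fn)
youden_j = sensitivity + specificity - 1        # Youden's index
csi = tp / (tp + fn + fp)                       # Critical Success Index
mcc = matthews_corrcoef(y_true, y_pred)

print(f"MCC={mcc:.3f}  error rate={error_rate:.3f}  "
      f"Youden J={youden_j:.3f}  CSI={csi:.3f}")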
|
Keywords: |
Healthcare, Diabetes, GDM, WTSO-MLP, Swarm Optimization |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
ENHANCING HEALTHCARE SECURITY WITH BLOCKCHAIN-POWERED SMART CONTRACTS |
|
Author: |
DR. NAIM SHAIKH, DR. MAMATHA G, KUKATI ARUNA KUMARI, DR. M.S.GIRIDHAR, DR.
KRISHNA NAND MISHRA, DR. A.PANKAJAM, DR. GANESH KUMAR R, SWATI GUPTA |
|
Abstract: |
The rationale behind this research stems from the increasing frequency of data
breaches in healthcare and the inadequacy of centralized systems to ensure
privacy, interoperability, and regulatory compliance. The present study emphasizes the importance of security in healthcare and builds its model on Smart Contracts. Emerging concerns about data security, privacy, and interoperability within healthcare organizations have been noted. The paper therefore focuses on deploying Smart Contracts together with blockchain technologies, with the fundamental aim of improving the security of healthcare infrastructure.
Blockchain is transforming healthcare systems for the better by eliminating
inefficiencies caused by fraud and outdated technologies, allowing for the
efficient, transparent, and secure issuance of Smart Contracts. The challenges
of confidentiality, data security, and access to relevant patient information
for medical professionals have been a problem in the healthcare sector. Most of
the existing EHR systems do not have adequate mechanisms for enforcing security
access controls, which hampers cooperation between healthcare institutions.
These security concerns pose risks for patients’ privacy and cripple the
adoption of modern information technology within the health sector. Simulation results show that the transaction processing time of the proposed model is below 1.5 seconds, compared with 2.5 seconds for the conventional model; the security breach probability is reduced to 0.05 from 0.35; and the data integrity verification time is below 1.0, compared with more than 1.75 for the conventional model. Because existing Electronic Health Record (EHR) systems face limitations in security, privacy enforcement, and interoperability, this study addresses the lack of automated, decentralized access control mechanisms. It proposes a
blockchain-powered Smart Contract model to fill these gaps and enhance
healthcare data governance and trust. |
|
Keywords: |
Blockchain, Smart Contract, Digital transformation, Healthcare system, Data
security. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
MACHINE LEARNING-DRIVEN IMPROVEMENTS IN SOFTWARE DELIVERY PIPELINES |
|
Author: |
RAYAVARAPU SRIDIVYA, BEHARA VENKATA NANDAKISHORE, DR VED SRINIVAS, DR. R. NIRUBAN, RAJESH TULASI, AMIT VERMA |
|
Abstract: |
Modern software delivery relies heavily on Continuous Integration as well as
Continuous Deployment (CI/CD) pipelines, yet these processes are frequently
hindered by inefficiencies, including longer build times, faulty testing, and
inefficient use of resources. This paper introduces a novel framework combining
Graph Neural Networks (GNNs) and Reinforcement Learning (RL) to intelligently
optimise CI/CD processes in DevOps environments. Graph Neural Networks (GNNs) accurately forecast failure sites and bottlenecks by modelling the
structural relationships among activities in a pipeline. RL agents dynamically
learn optimal scheduling policies that reduce build time and resource usage
through adaptive feedback mechanisms. The main contributions include 1) an RL
structure that dynamically assigns computational resources to minimise build
latency and test flakiness, 2) a hybrid GNN+RL architecture that collaboratively
improves both predictive accuracy as well as decision-making efficiency, and 3)
a GNN-based method for identifying critical path nodes as well as failure-prone
components in the pipeline graph. Through experimental assessment on real-world
DevOps pipelines, in comparison to baseline and RL-only techniques, it can
reduce CPU consumption by 25%, increase failure prediction rate by 49%, and
decrease build time by 38%. Toward intelligent, self-healing CI/CD infrastructures, our study bridges the gap between AI-driven optimisation and real-world DevOps practice. |
|
Keywords: |
CI/CD, DevOps, Graph Neural Networks, Reinforcement Learning, Pipeline
Optimisation, Failure Prediction, Resource Allocation |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
CONTINUAL LEARNING-DRIVEN LAPLACIAN PALE TRANSFORMATIVE CONVOLUTION NETWORK
FRAMEWORK FOR DIABETIC RETINOPATHY DETECTION FROM RETINAL FUNDUS IMAGES |
|
Author: |
K.S.NALINI, Dr.ARUNACHALAM A.S |
|
Abstract: |
Diabetic retinopathy (DR) is a diabetes-related eye ailment caused by retinal
blood vessel (BV) damage. This manuscript presents a novel continual DL
framework for efficient DR disease detection. Initially, input retinal fundus
images are taken from the IDRI Dataset for accurate DR disease detection, which
undergoes the preprocessing stage by employing a Color Wiener Filter (CWF) that
can enhance image clarity by adaptively removing noise while maintaining edge
details for further processing. After preprocessing, a novel Laplacian Pale
Transformative Convolution Network (LPTCN) is introduced, which distinguishes between different DR abnormalities with greater accuracy. Moreover, the
proposed framework integrates Elastic Weight Consolidation (EWC) and Herding
Selection Replay (HSR) to prevent catastrophic forgetting on new data samples.
The proposed framework is simulated in the Python platform. In the simulation
part, the average Accuracy of 97%, Matthew’s Correlation Coefficient (MCC) of
0.936, symmetric mean absolute percentage error (SMAPE) of 2.07, and Computation
Time (CT) of 4.9s, Youden’s index (YI) of 0.89 are obtained by the suggested
framework on DR identification. |
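The Elastic Weight Consolidation term mentioned above is commonly written as (lambda/2) * sum_i F_i (theta_i - theta_i*)^2; the PyTorch sketch below computes that penalty with placeholder Fisher values and a toy model, and is not the authors' continual-learning pipeline.

import torch

def ewc_penalty(model, fisher, old_params, lam=0.4):
    """(lam/2) * sum_i F_i * (theta_i - theta_i_old)^2 over named parameters."""
    loss = torch.zeros(())
    for name, param in model.named_parameters():
        if name in fisher:
            loss = loss + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss

model = torch.nn.Linear(4, 2)                              # toy classifier
old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
fisher = {n: torch.ones_like(p) for n, p in model.named_parameters()}  # placeholder

x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
task_loss = torch.nn.functional.cross_entropy(model(x), y)
total = task_loss + ewc_penalty(model, fisher, old_params)
total.backward()
print("task + EWC loss:", float(total))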
|
Keywords: |
Diabetic Retinopathy (DR), Retinal Fundus Image (RFI), Multi-class
Classification, Image Preprocessing, Continual Learning, Laplacian Pale
Transformative Convolution Network.
|
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
SMART MEDPLANT: A HYBRID DEEP LEARNING AND RANDOM FOREST MODEL FOR GEOTAGGED
MEDICINAL PLANT DETECTION |
|
Author: |
BEULAH JABASEELI .N, D. UMANANDHINI |
|
Abstract: |
Accurate and scalable medicinal plant identification is critical for
biodiversity conservation, pharmaceutical development, and ethnobotanical
research. Traditional methods, reliant on expert knowledge and manual
morphological analysis, are often labor-intensive, error-prone, and limited in
scope. To overcome this issue, our research introduces a hybrid model, TLBR101RF (Transfer Learning Based ResNet101 with Random Forest), to identify medicinal plants from their leaf images with high accuracy. The model employs ResNet101's deep residual blocks for effective hierarchical feature extraction, and a Random Forest classifies the resulting features, providing robustness against over-fitting and better generalization on imbalanced datasets. The pipeline consists of data acquisition, GAN-based data augmentation, segmentation with LeGrNet, and classification of the leaf image. TLBR101RF achieves a high classification accuracy of 98.69% when compared with CNN models such as VGG16, MobileNet, and Inception. The incorporation of geolocation metadata improves classification accuracy by accounting for regional morphological changes influenced by environmental factors. A GPS-enabled web application was developed for real-time species identification, relevant to conservation, agriculture, and pharmaceutical research. In TLBR101RF, deep learning and machine learning are combined to identify medicinal plants accurately: the model categorises medicinal plants with 98.61% accuracy, compared with CNN models such as MobileNet (97.7%), ResNet50 (97%), and VGG16 (96.5%), yielding an effective, geolocation-aware application that identifies medicinal plants with about 98% accuracy using the ResNet101 and Random Forest algorithms. |
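A sketch of the transfer-learning-plus-Random-Forest idea: a frozen ImageNet ResNet101 (Keras) as a feature extractor feeding a scikit-learn Random Forest. The image tensors and labels are random placeholders, TensorFlow must be installed, and the ImageNet weights are downloaded on first run; this is not the trained TLBR101RF model.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from tensorflow.keras.applications import ResNet101
from tensorflow.keras.applications.resnet import preprocess_input

backbone = ResNet101(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    return backbone.predict(preprocess_input(images.copy()), verbose=0)

rng = np.random.default_rng(0)
X_img = rng.uniform(0, 255, size=(32, 224, 224, 3)).astype("float32")
y = rng.integers(0, 4, size=32)              # four hypothetical plant species

features = extract_features(X_img)           # shape (32, 2048)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(features, y)
print("training accuracy on toy data: %.2f" % clf.score(features, y))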
|
Keywords: |
Pharmaceuticals, Transfer Learning, ResNet101, Random Forest, Localization |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
QUANTUM CRYPTOGRAPHY FOR SECURE CLOUD DATA STORAGE AND TRANSMISSION |
|
Author: |
RAHUL SURYODAI, DESIDI NARSIMHA REDDY, DR. HEMLATA MAKARAND JADHAV, ANIL KUMAR
MUTHEVI, DR. V. V. R. MAHESWARA RAO, V SIVARAMARAJU VETUKURI |
|
Abstract: |
Traditional encryption algorithms like RSA and ECC are increasingly vulnerable
due to advances in quantum computing, which enable attacks through techniques
such as Shor’s and Grover’s algorithms. To address this challenge, the
researchers proposed a hybrid encryption system that integrates Quantum Key
Distribution (QKD) with the Advanced Encryption Standard (AES-256) to ensure
secure data transmission and storage in cloud environments. The system employs
the BB84 protocol over a virtual 50 km quantum channel for key generation and
distribution. Additionally, the study introduces the Hybrid Secure Transmission Protocol
(HSTP) that rotates session keys every 5 seconds, enhancing security through
continuous key renewal. By adapting the Key Management Service (KMS) to utilise
QKD-generated keys, this approach is compatible with popular cloud platforms
such as AWS S3 and Google Cloud Storage. Experimental evaluations comparing
AES+QKD with conventional AES+RSA demonstrate that AES+QKD achieves higher key
generation rates (>1050 MB/s), superior data consistency (99.9%), and maintains
low latency under heavy workloads, while effectively resisting both classical
and quantum attacks. This work presents a scalable, quantum-safe cloud security
architecture, showcasing the practical integration of QKD in large-scale cloud
infrastructures through the novel HSTP protocol and validated performance
models. |
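The key-sifting step of BB84 can be illustrated in plain Python; this is an idealised, noiseless simulation without an eavesdropper (real QKD needs quantum hardware, error estimation, and privacy amplification).

import secrets

def random_bits(n):
    return [secrets.randbits(1) for _ in range(n)]

n = 64
alice_bits = random_bits(n)
alice_bases = random_bits(n)        # 0 = rectilinear, 1 = diagonal
bob_bases = random_bits(n)

# Ideal channel: Bob's result equals Alice's bit when bases match, random otherwise.
bob_results = [bit if ab == bb else secrets.randbits(1)
               for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: publicly compare bases and keep only the matching positions.
alice_key = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
bob_key = [r for r, ab, bb in zip(bob_results, alice_bases, bob_bases) if ab == bb]

assert alice_key == bob_key          # holds only in this idealised setting
print(f"raw bits: {n}, sifted key length: {len(alice_key)} (about n/2 expected)")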
|
Keywords: |
Quantum Key Distribution (QKD), AES-256 Encryption, Cloud Data Security,
Post-Quantum Cryptography, Hybrid Secure Transmission Protocol (HSTP). |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
HYBRID DEEP ATTENTION-BASED EMOTION RECOGNITION USING TEMPORAL-SPATIAL
OPTIMIZATION FOR MULTI-SUBJECT VIDEO ANALYSIS |
|
Author: |
DR. RAMARAJU NAGARJUNA KUMAR, DR. G. PRASANNA LAKSHMI, DR. RABINS PORWAL, DR. C.
MADHUSUDHANA RAO, SATYANARAYANA MURTHY VALLABHAJOSYULA, M. LAKSHMANA KUMAR, DR.
M. NAGABHUSHANA RAO |
|
Abstract: |
Group-level emotion recognition (GER) is essential in applications involving
human-computer interaction, public surveillance, and affective computing. This
paper proposes a novel hybrid framework that integrates Enhanced Particle Swarm
Optimization (EPSO) for feature selection with Recurrent Neural Networks (RNN)
for modeling temporal emotion dynamics in video sequences. The system begins
with preprocessing and frame extraction, followed by deep and statistical
feature extraction. EPSO is employed to select the most informative features,
which are then input into an RNN for sequential emotion prediction. Evaluations
conducted on the AFEW dataset demonstrate that the proposed EPSO-RNN model
outperforms traditional classifiers such as CNN, VGG-16, and SVM in terms of
accuracy, precision, recall, and F1 score. The EPSO-RNN model demonstrated
smooth convergence with minimal overfitting, aided by early stopping. The
training accuracy peaked at 92.3%, with a validation accuracy of 89.4%,
outperforming CNN (78.2%), VGG-16 (82.5%), and SVM (69.7%). Corresponding loss
curves showed a steady decline, reinforcing the model’s stability. The results
affirm the robustness and scalability of the proposed approach in complex,
real-world group emotion recognition scenarios. |
|
Keywords: |
Affective Computing, Emotion Recognition, Attention Mechanism, Temporal-Spatial
Optimization, Deep Learning, Multi-Subject Video Analysis. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
HYBRID DRL-CTO: OPTIMIZATION DRIVEN DEEP LEARNING FRAMEWORK FOR CEREBELLAR
ATAXIA BASED ON HUMAN GAIT ANALYSIS |
|
Author: |
EDARA SREENIVASA REDDY, SUNIL PREM KUMAR PRATHIPATI |
|
Abstract: |
This study presents a Hybrid Deep Repeated Learning with Competitive Tuning
Optimization (Hybrid DRL-CTO) framework for detecting and classifying Cerebellar
Ataxia (CA) using human gait analysis. The proposed approach integrates a
repeated learning mechanism with the CTO algorithm to enhance parameter
optimization, improve convergence, and reduce computational complexity. Deep S3P
features extracted from regions of interest in gait frames enable the model to
capture discriminative representations while minimizing feature dimensionality,
thereby improving classification accuracy. Experimental evaluations demonstrate
that Hybrid DRL-CTO achieves 96.8% accuracy, 95.1% precision, and 97.1% recall,
consistently outperforming existing techniques in k-fold validation. The
combination of CNN-LSTM with the CTO algorithm further prevents overfitting and
ensures reliable performance, offering a robust solution for CA classification
with high efficiency and reduced computational effort. |
|
Keywords: |
Cerebellar Ataxia, Repeated Learning, Deep Learning, Gait Analysis, Competitive
Tuning Optimization. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
BLOCK CHAIN BASED SECURED EHR WITH UNIFIED SIGNATURE ENCRYPTION SCHEME |
|
Author: |
VENKATESWARAN S, VIJAYARAJ N |
|
Abstract: |
Electronic Health Records (EHRs) manage medical information in digital form and allow patients to view their information and share it with doctors for diagnosis. An identity-based cryptosystem, specifically the Unified Signature-Encryption Scheme (US-ES), is designed to enable the efficient and secure exchange of healthcare information within a data-sharing network. Leveraging bilinear pairings, the proposed system integrates a distributed, tamper-proof database that replicates health records across a peer-to-peer (P2P) network. EHRs are treated as individual events, timestamped, and assigned a
cryptographic hash. To ensure transparency and data reliability, these entries
are organized into transaction blocks and each node in the P2P network maintains
a copy of the ledger. Additionally, the healthcare blockchain includes user
permission lists, which dictate access control and serve as essential network
instructions. This framework ensures secured exchange of EHR both within and
between medical institutions, eliminating the need for a third-party provider.
To guarantee the integrity of the EHRs, US-ES cryptographic technique is applied
on the blockchain, ensuring the security of decentralized healthcare data
exchanges. The US-ES mechanism combines the functionalities of both digital
signatures and encryption, providing confidentiality, authenticity, and
efficiency in data transmission. Ultimately, the proposed US-ES technique is
combined with digital signature to present a robust and secure method for
healthcare data exchange, significantly reducing vulnerabilities associated with
traditional systems. By removing the necessity for a trusted third party, the
proposed solution enhances both the security and integrity of EHR transactions,
fostering safer data sharing in the healthcare. |
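The hash-chained ledger idea described above can be illustrated with the minimal sketch below; the US-ES signcryption layer and bilinear pairings are deliberately omitted, so this shows only the tamper-evident chaining of timestamped EHR events.

    import hashlib, json, time

    def make_block(record: dict, prev_hash: str) -> dict:
        # Each EHR event is timestamped, linked to the previous block, and hashed.
        block = {"timestamp": time.time(), "record": record, "prev_hash": prev_hash}
        payload = json.dumps(block, sort_keys=True).encode()
        block["hash"] = hashlib.sha256(payload).hexdigest()
        return block

    def verify_chain(chain: list) -> bool:
        # Recompute every hash and check the links; any tampering breaks the chain.
        for i, block in enumerate(chain):
            body = {k: v for k, v in block.items() if k != "hash"}
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
                return False
            if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
                return False
        return True

    ledger = [make_block({"patient": "P001", "event": "lab result"}, prev_hash="0")]
    ledger.append(make_block({"patient": "P001", "event": "prescription"}, ledger[-1]["hash"]))
    print(verify_chain(ledger))   # True while the ledger is untampered

In the full scheme, each record would additionally be signcrypted with US-ES before being placed in a block, and permission lists would gate which peers may read it.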
|
Keywords: |
Blockchain, Electronic Health Record(EHR), Unified Signature-Encryption (USE),
Identity-Based Cryptosystem (IBC). |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
DEPTH-FIRST SEARCH (DFS) BASED CHARACTER RECOGNITION USING GRAPH MATCHING
ALGORITHMS |
|
Author: |
M. SARAVANAKUMAR, Dr. S. KANNAN |
|
Abstract: |
This study focuses on full multilingual printed-character recognition for English, Tamil, Malayalam, and Hindi in UTF-8 encoding, utilizing both Times New Roman and Courier New font variations as well as handwritten style variations. The proposed method employs Shi-Tomasi corner detection combined
with a Depth-First Search (DFS) approach for adjacency matrix transformation.
The adjacency matrix, represented as binary values (0 or 1), undergoes
conversion and swapping to facilitate comparison across all scale variants,
determining similarity using hierarchical matching techniques. The recognition
process follows a structured approach: characters are matched based on the
highest degree first, followed by the next highest degree, and subsequently,
partial exact character recognition is conducted. The recognized character
matrix is then utilized to print Unicode values and generate recognized
character text. The recognized text is systematically converted into a Word file
and saved, while all recognition input characters are preserved in a designated
folder. Additionally, recognized text is stored in a structured format for
further analysis. The system exclusively visualizes the program's input plot,
corner detection plot, and shell recognition text. Furthermore, character
recognition is enhanced using a comparative approach based on Graph Edit
Distance, ensuring accurate shape differentiation between the two font styles.
Character recognition has been extensively studied using feature-based and
machine learning approaches, yet graph-theoretic methods remain underexplored
despite their potential for providing structural interpretability. A particular
knowledge gap exists in the literature regarding the application of Depth-First
Search (DFS) traversal on adjacency matrices derived from character images.
While prior studies focus on generic graph isomorphism or statistical
descriptors, they do not systematically evaluate the role of DFS traversal in
capturing distinctive structural patterns of characters for recognition
purposes. This study addresses that gap by proposing a DFS-based character
recognition framework combined with graph matching algorithms. Characters are
represented as graphs constructed from corner-detected nodes and their adjacency
relationships, and recognition is achieved by comparing DFS traversal sequences
with predefined templates. The framework is tested on English (uppercase and
lowercase), Tamil, Hindi, and Malayalam characters, including both printed fonts
(Times New Roman, Courier New) and handwritten style variations. The results
demonstrate that DFS traversal effectively captures structural uniqueness,
achieving an average accuracy of 98% for both printed and handwritten datasets.
The approach also provides insights into computational efficiency and
misclassification patterns, particularly in noisy or structurally similar
characters. The novelty of this work lies in its systematic integration of DFS
traversal with graph matching for character recognition, offering a transparent
and interpretable alternative to black-box machine learning models. The new
knowledge created by this research is the demonstration that DFS-based graph
traversal is not only computationally feasible but also robust across multiple
languages and character styles, thus contributing a structural recognition
paradigm that complements existing CR techniques. |
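A minimal sketch of the core traversal step is shown below: a DFS over a binary adjacency matrix (assumed to have been built from Shi-Tomasi corner points) yields a visit sequence that can be compared with stored templates. The toy matrix is illustrative only, and the degree-based ordering is a simplification of the highest-degree-first matching described above.

    def dfs_sequence(adj, start=0):
        # Iterative DFS over a 0/1 adjacency matrix; returns the visit order.
        n = len(adj)
        visited, order, stack = [False] * n, [], [start]
        while stack:
            node = stack.pop()
            if visited[node]:
                continue
            visited[node] = True
            order.append(node)
            # Push higher-degree neighbours last so they are expanded first.
            neighbours = [j for j in range(n) if adj[node][j] == 1 and not visited[j]]
            neighbours.sort(key=lambda j: sum(adj[j]))
            stack.extend(neighbours)
        return order

    # Toy adjacency matrix for a 4-node character skeleton.
    adj = [[0, 1, 1, 0],
           [1, 0, 0, 1],
           [1, 0, 0, 1],
           [0, 1, 1, 0]]
    print(dfs_sequence(adj))   # [0, 2, 3, 1]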
|
Keywords: |
UTF-8 Character Recognition, Shi-Tomasi Corner Detection, Adjacency Matrix,
Depth-First Search (DFS) Using Graph-Based Character Recognition, Times New
Roman & Courier New Font, Graph Edit Distance, Graph Based Matching (GBM). |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
IMPROVED BINARY DIFFERENTIAL EVOLUTION BASED CLASSIFIER SELECTION IN STACKED
ENSEMBLE FRAMEWORK FOR EFFECTIVE DECEPTION SYSTEM |
|
Author: |
VIJAYASREE BODDU, PRAKASH KODALI |
|
Abstract: |
Deception refers to acting in a way that causes another person to believe something that is not true. Deception is a national security concern when investigating crimes. Building an accurate deception detection system is a critical challenge in criminal analysis, necessitating the development of efficient predictive models. Despite advancements in ensemble-based machine learning models, selecting diverse classifiers to enhance model performance remains a significant hurdle. Past efforts to boost classification accuracy using ensemble learning encountered constraints. To enhance deception detection performance, this study proposes a two-level stacking framework. Selecting the best classifier combination for this stacking framework is a hard problem. It is formulated as a combinatorial optimization problem and solved using the Binary Differential Evolution (BDE) algorithm. For an effective solution and better results, in the proposed approach, base learners are encoded using binary encoding, while meta learners are encoded using one-hot encoding. Further, the proposed BDE uses a dynamic mutation scaling factor and crossover rate. Finally, the continuous solution space is converted to a binary one using a sigmoid transfer function followed by thresholding. In this study, nine diverse base learners and four meta learners are used to construct the stacking ensemble model. The proposed framework optimizes the combination of these classifiers using BDE, aiming to improve predictive performance. The optimization process relies on a fitness function derived from the ensemble accuracy score and the number of classifiers. A Concealed Information Test (CIT) is performed to collect the deception dataset, which is used to evaluate the proposed model. The proposed model outperformed state-of-the-art (SOTA) models in the literature in terms of accuracy, sensitivity, specificity, and F1-score. When compared with SOTA ensemble models, it not only achieved the best performance in terms of accuracy and sensitivity but also reduced the ensemble model complexity drastically. Our findings demonstrate improvements in performance across multiple metrics, highlighting the effectiveness of the BDE-based ensemble approach in designing a more accurate deception system. |
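The binary mapping step described above can be sketched as follows, assuming a standard DE/rand/1 mutation with fixed F and CR; the paper's dynamic schedules and fitness-based selection are omitted.

    import numpy as np

    rng = np.random.default_rng(1)

    def bde_step(pop, F=0.5, CR=0.9, threshold=0.5):
        # One generation of a binary DE: mutate in continuous space, cross over,
        # then map back to a 0/1 classifier-selection mask via sigmoid + threshold.
        n, d = pop.shape
        new_pop = pop.copy()
        for i in range(n):
            a, b, c = pop[rng.choice([j for j in range(n) if j != i], 3, replace=False)]
            mutant = a + F * (b - c)                       # DE/rand/1 mutation
            cross = rng.random(d) < CR                     # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            prob = 1.0 / (1.0 + np.exp(-trial))            # sigmoid transfer function
            new_pop[i] = (prob > threshold).astype(float)  # thresholding -> binary mask
        return new_pop

    pop = rng.random((10, 9))            # 10 candidate masks over 9 base learners
    print(bde_step(pop)[0])              # e.g. a 0/1 selection mask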
|
Keywords: |
Binary Differential Evolution, Deception Detection, Ensemble Learning,
Optimization, Stacking |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
ADAPTIVE CARIBOU DEFENSE PROTOCOL (ACDP): A BIO-INSPIRED INTRUSION DETECTION
FRAMEWORK FOR SECURING IOMT NETWORKS |
|
Author: |
BASIL BABY K, Dr. A. NITHYA RANI |
|
Abstract: |
Network security has become a critical concern due to the increasing complexity
of cyber threats targeting interconnected systems. The “Internet of Medical
Things” (IoMT) has transformed healthcare by enabling real-time monitoring,
remote diagnostics, and automated medical interventions. Integrating IoMT
devices into healthcare infrastructures exposes networks to security
vulnerabilities, requiring robust intrusion detection mechanisms. “Host-based
Intrusion Detection Systems” (HIDS) provide a localized security approach,
monitoring system logs, processes, and behaviors to detect unauthorized
activities. Traditional detection techniques often struggle with evolving
threats and resource limitations in IoMT environments. Bio-inspired optimization
techniques offer adaptive security enhancements, refining detection mechanisms
while minimizing computational overhead. The Adaptive Caribou Defense Protocol
(ACDP) leverages nature-inspired intelligence to optimize intrusion detection,
ensuring enhanced security resilience. By integrating bio-inspired approaches
with HIDS, intrusion detection frameworks can achieve improved adaptability,
real-time threat identification, and efficient security enforcement across IoMT
networks, mitigating emerging cyber risks effectively. |
|
Keywords: |
Host Intrusion Detection Systems, Internet of Medical Things, Intrusion Detection, Cybersecurity in Healthcare, Caribou Optimization |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
MACHINE LEARNING IN IMAGE PROCESSING DRIVING INNOVATION IN VISUAL DATA ANALYSIS |
|
Author: |
DEVAKI KUTHADI, ARCHANA NAGELLI, PARUCHURI JAYASRI, SUSHUMA NARKEDAMILLI, KARRI
L GANAPATHI REDDY, ANDE SASI HIMABINDU, C. GNANAVEL |
|
Abstract: |
This work introduces a new hybrid CNN-GAN model for image processing and its application to medical and natural image analysis. We aim to determine whether using CNNs for feature extraction and GANs for enhancement improves both image quality and classification accuracy. In this work, we used medical CT
scans, MRIs, and natural images and measured performance by calculating
accuracy, precision, recall, F1-score, PSNR, and SSIM. Compared with existing
architectures such as VGGNet and ResNet, the proposed Hybrid CNN–GAN achieved
superior medical image classification accuracy of 95.2% and delivered enhanced
image quality with PSNR of 37.2 dB and SSIM of 0.97. The model also demonstrated
strong performance on natural image datasets, confirming its ability to
generalize across domains. These results indicate that integrating CNN-based
feature extraction with GAN-based enhancement improves both classification
accuracy and perceptual image quality in medical and natural image processing
tasks. This work provides more precise methods of acquiring and analyzing images, which can significantly benefit fields that depend on accurate visual data analysis, such as healthcare and autonomous vehicles. |
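For reference, the reported image-quality metrics can be computed with scikit-image as in the sketch below; the images here are synthetic stand-ins, and the Hybrid CNN-GAN itself is not reproduced.

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    reference = np.random.rand(128, 128)    # stand-in ground-truth image
    enhanced = np.clip(reference + np.random.normal(0, 0.05, reference.shape), 0, 1)

    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
    ssim = structural_similarity(reference, enhanced, data_range=1.0)
    print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")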
|
Keywords: |
Hybrid CNN, Generative Adversarial Network, Image Processing, Medical Imaging,
Image Enhancement, Deep Learning |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
SEFEX: A NOVEL APPROACH TO ACCURATE FACIAL EMOTION DETECTION USING EFFICIENTNET
AND ATTENTION MECHANISMS |
|
Author: |
S SAHAYA SUGIRTHA CINDRELLA, R JAYASHREE |
|
Abstract: |
Facial emotion detection has become a pivotal technology in applications ranging from human–computer interaction to mental health monitoring, security, and customer experience management. However, existing approaches face significant challenges in accurately recognizing emotions, especially when expressions are subtle, overlapping, or distorted by variations in lighting, pose, or noise. These limitations reduce reliability in real-world applications and create the need for more robust solutions. To address these challenges, this paper introduces the SEFEX (Single Emotion Facial Expression Detection) model, which combines advanced preprocessing techniques, EfficientNet-based feature extraction, and an attention mechanism to improve recognition accuracy. The model is evaluated on a comprehensive dataset encompassing eight emotion categories: happiness, sadness, anger, surprise, fear, disgust, contempt, and neutral. Preprocessing steps such as bilateral filtering for noise reduction, facial alignment using Haar Cascades, and landmark detection ensure consistent and high-quality inputs for training. EfficientNet is then employed to extract robust features, while the attention mechanism highlights key facial regions, enabling the model to capture subtle expression differences. SEFEX achieves an accuracy of 95.34%, outperforming state-of-the-art models including VGG16, ResNet50, and DenseNet. These results demonstrate SEFEX’s capability to enhance emotion recognition reliability, making it suitable for real-time applications in healthcare, customer service, and human-computer interaction. |
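A minimal sketch of the preprocessing stage described above (bilateral filtering plus Haar-cascade face detection with OpenCV) is given below; landmark alignment, EfficientNet, and the attention head are omitted, and the 224x224 crop size is an assumption.

    import cv2

    def preprocess_faces(image_path):
        # Load, denoise with a bilateral filter, detect faces, and return resized crops.
        img = cv2.imread(image_path)
        if img is None:
            raise FileNotFoundError(image_path)
        denoised = cv2.bilateralFilter(img, 9, 75, 75)      # d, sigmaColor, sigmaSpace
        gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        crops = [denoised[y:y + h, x:x + w] for (x, y, w, h) in faces]
        return [cv2.resize(c, (224, 224)) for c in crops]   # assumed network input size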
|
Keywords: |
Facial Emotion Detection, Deep Learning, Single Emotion Facial Expression
Detection (SEFEX), EfficientNet, Feature Extraction, Attention Mechanism,
Human-Computer Interaction, Noise Reduction, Image Preprocessing, Emotion
Recognition. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
IMPROVED GREY WOLF OPTIMIZATION BASED BAND SELECTION FOR HYPERSPECTRAL REMOTE
SENSING IMAGE CLASSIFICATION |
|
Author: |
MAANVITHA M D, PHANEENDRA KUMAR B L N |
|
Abstract: |
The selection of bands in hyperspectral imagery represents a significant
challenge within the remote sensing discipline, attributable to the dataset's
high dimensionality and the presence of redundant information across the various
bands. This manuscript introduces a novel framework aimed at the classification
of hyperspectral images. To identify the optimal set of bands, an enhanced grey
wolf optimization technique employing a non-linear function is implemented,
which iteratively refines the band configurations based on the optimal,
suboptimal, and tertiary solutions, mirroring the roles of the "alpha," "beta,"
and "delta" wolves, respectively. Subsequently, the intrinsic information from
the band set is extracted through a spatial filter that isolates the albedo
component. A non-linear Support Vector Machine is trained with the extracted
features. Empirical results derived from the Indian Pines and University of
Pavia datasets confirm the effectiveness of the proposed framework in attaining
superior classification accuracy over the state-of-the-art methods by accurately
identifying appropriate bands and categorizing all varieties. The empirical
quantitative and qualitative results validate the efficacy of the proposed
framework. The proposed methodology holds significant relevance for land use
analysis, environmental monitoring, and remote sensing applications, as it
markedly reduces the number of bands utilized while maintaining classification
performance. |
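The grey wolf position update referenced above can be sketched as follows; the non-linear decay of the control parameter a is shown here as a cosine schedule purely for illustration, since the paper's exact function is not reproduced, and the band-subset fitness and albedo filtering are omitted.

    import numpy as np

    rng = np.random.default_rng(7)

    def gwo_update(wolves, alpha, beta, delta, t, t_max):
        # wolves: real-valued position matrix, one row per wolf (candidate band set).
        a = 2.0 * np.cos((np.pi / 2.0) * t / t_max)         # non-linear decay from 2 to 0
        new = np.empty_like(wolves)
        for i, X in enumerate(wolves):
            pulls = []
            for L in (alpha, beta, delta):                  # best three solutions
                r1, r2 = rng.random(X.shape), rng.random(X.shape)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * L - X)
                pulls.append(L - A * D)
            new[i] = np.mean(pulls, axis=0)                 # average of alpha/beta/delta pulls
        return new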
|
Keywords: |
Hyperspectral Images, Support Vector Machine, Grey Wolf Optimization, Band
selection |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
TRANSFORMING THE MONETARY GIFT EXPERIENCE: QUANTITATIVE EVALUATION OF A MOBILE
APP WITH PLAYFUL INTERACTIONS USING THE IVPRD INDEX |
|
Author: |
ALAN M. INFANTE VIDALON, ROSARIO D. OSORIO CONTRERAS, WILSON A. LAZO TAPIA |
|
Abstract: |
The development of this study confirms the perceived value of digital gifts
through the (IVVPRD), which is a multidimensional construct for the evaluation
of mobile applications, which digitizes cultural practices for money. The
objective of the study is to prevent the gap between economic digitalization and
traditional cultural practices. Using a quasi-experimental design with 100
participants from Peru, a mobile application with fun elements was evaluated
relatively against traditional methods (physical envelopes). IVPRD consists of
four dimensions: monetary value (VM), interaction efforts (EI), social/cultural
context (SC), and experience experience (EC), which is expressed with the
equation IVPRD = 0.31 (VM) 0.28 (EI) 0.32 (SC) - 0.12 (EC). The results showed
the durability of the model (r² = 0.847), revealing that the digital application
has reached significantly higher values (IVPRD = 7.49 ± 0.75) against the
traditional method (IVPRD = 5.10 ± 0.68). Interaction efforts (EI) was found to
be the most important difference between the methods (5.23 points), while the
social/cultural context (SC) appeared as a factor with the highest predictive
weight (β = 0.32). Significant differences were observed according to
demographic and contextual variables: preference for new users (18-24 years:
4.10 points) and the use of socioeconomic status (18-24 and 3.70 respectively),
as well as the inverse correlation between formality and apartment choice. The
experience analysis identified "game setup" as the most important point for
improving the success rate (89.5%) and moderate disappointment (2.68/10). This
study shows that the inclusion of recreational elements and a social and
cultural context significantly transform perceptions of value in applications,
providing a methodological framework that relates to culturally sensitive
FinTech solutions |
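As a worked example of the index defined above, the weighted sum can be evaluated directly; the dimension scores below are illustrative, not the study's data.

    def ivprd(vm, ei, sc, ec):
        # IVPRD = 0.31*VM + 0.28*EI + 0.32*SC - 0.12*EC, using the reported weights.
        return 0.31 * vm + 0.28 * ei + 0.32 * sc - 0.12 * ec

    print(round(ivprd(vm=8.0, ei=8.5, sc=8.2, ec=3.0), 2))   # 7.12, in the digital-app range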
|
Keywords: |
User Experience, Gamification, Fintech, Digital Monetary Gift, Mobile App. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
Title: |
TRANSFER LEARNING BASED ROBERTA WITH SMOTE: A NOVEL APPROACH FOR TWITTER
SENTIMENT ANALYSIS FOR HATE SPEECH |
|
Author: |
REKHA JANGRA, ABHISHEK KAJAL |
|
Abstract: |
In the current digital era, a sharp rise in social interactions on various social networking sites has been witnessed around the globe. The X platform (formerly Twitter) has become a leading platform for users to express their opinions on pressing social, political, and religious topics. Though such discussions are healthy for a politically and socially active generation, many tweets unfortunately carry hate speech. In the past few years, India too has witnessed an exponential rise in posts containing hate speech. Hence, we introduce a deep learning technique for detecting posts containing hate speech, using the X platform as the source of the dataset and classifying post sentiments as negative or positive. Current research has assessed traditional ML methods, including SVM, Naïve Bayes, and Random Forest, alongside DL techniques. In light of the limitations of current methods, the suggested work offers a solution by merging RoBERTa with transfer learning. During preprocessing, the crawled Twitter data has been filtered, case-folded, and stemmed. Subsequently, stop word removal and tokenization have been executed. Data labeling has been conducted via deep learning algorithms. A word cloud has been generated, and a frequency chart has been produced based on positive and negative sentiments. Ultimately, the accuracy and error rates of LSTM, BERT, optimized RoBERTa, and hybrid transfer learning-based RoBERTa with SMOTE have been simulated. Simulation results indicate that LSTM attains 94.99% accuracy, BERT 92.87%, optimized RoBERTa 97.45%, hybrid RoBERTa with transfer learning 98.72%, and the proposed SMOTE-based hybrid RoBERTa with transfer learning 99.12%. The proposed model demonstrates superior accuracy, precision, recall, and F1-score relative to traditional methods. Consequently, the suggested approach addresses the accuracy issues present in traditional hate speech sentiment analysis systems. |
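The class-balancing step can be illustrated with the sketch below, where SMOTE is applied to vectorized tweets before training a classifier; TF-IDF and logistic regression are lightweight stand-ins rather than the RoBERTa transfer-learning pipeline, and the toy texts are placeholders.

    from imblearn.over_sampling import SMOTE
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy, imbalanced corpus: label 1 (hate) is the minority class.
    texts = ["sample tweet one", "sample tweet two", "hateful sample", "neutral sample"] * 25
    labels = [0, 0, 1, 0] * 25

    X = TfidfVectorizer().fit_transform(texts)                 # vectorize tweets
    X_res, y_res = SMOTE(random_state=42).fit_resample(X, labels)  # oversample minority class

    clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)  # stand-in classifier
    print(clf.score(X_res, y_res))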
|
Keywords: |
Sentiment Analysis, Hate Speech, LSTM, BERT, ROBERTA, Transfer learning, SMOTE |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2025 -- Vol. 103. No. 19-- 2025 |
|
Full
Text |
|
|
|