Submit Paper / Call for Papers
The journal receives papers in a continuous flow and considers articles from a
wide range of Information Technology disciplines, from the most basic research
to the most innovative technologies. Please submit your papers electronically
through our submission system at http://jatit.org/submit_paper.php in MS Word,
PDF, or a compatible format so that they may be evaluated for publication in
the upcoming issue. This journal uses a blinded review process; please include
all your personally identifiable information in the manuscript when submitting
it for review, and we will redact the necessary information on our side.
Submissions to JATIT should be full research / review papers (properly
indicated below the main title).
Journal of Theoretical and Applied Information Technology
December 2025 | Vol. 103 No. 24
|
Title: |
OPTIMISED CUCKOO-INSPIRED DISCRETE SYMBIOTIC ORGANISMS SEARCH STRATEGY FOR TASK
SCHEDULING IN A CLOUD COMPUTING ENVIRONMENT |
|
Author: |
SULEIMAN SAAD, ABDULLAH MUHAMMED, MOHAMMED ABDULLAHI, AZIZOL ABDULLAH, FAHRUL
HAKIM AYOB |
|
Abstract: |
The cloud computing paradigm is rapidly expanding as there is a notable shift
away from other distributed computing approaches and traditional IT
infrastructures. As a result, optimised task scheduling techniques have become
essential for managing the growing complexity of cloud environments. In cloud
computing, numerous tasks must be allocated to a limited number of diverse
virtual machines, aiming to reduce the imbalance between local and global search
spaces while optimising system utilisation. Task scheduling is an NP-complete
problem, meaning no exact polynomial-time algorithm is known and only
near-optimal results are achievable in practice, especially when dealing with
large-scale tasks in cloud computing. This paper introduces an optimised
strategy, the Cuckoo-based
Discrete Symbiotic Organisms Search (C-DSOS), for optimal task scheduling in
cloud environments. The method is based on the Standard Symbiotic Organism
Search (SOS), a nature-inspired metaheuristic optimisation algorithm used for
numerical optimisation problems. SOS mimics the symbiotic relationships seen in
ecosystems, such as mutualism, commensalism, and parasitism. The proposed
technique was evaluated using the CloudSim toolkit simulator, and experimental
results showed that C-DSOS outperforms the benchmarked Simulated Annealing
Symbiotic Organism Search (SASOS) algorithm, commonly used in task scheduling.
C-DSOS demonstrated a better convergence rate, especially in larger search
spaces, making it well-suited for cloud task scheduling. Additionally, a t-test
analysis revealed that the improvement of C-DSOS over SASOS is statistically
significant, particularly in scenarios involving larger search spaces. |
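The mutualism phase of the standard SOS algorithm that C-DSOS builds on can be sketched as follows. This is a minimal continuous-variable illustration (function and variable names are ours), and it omits the discretization and cuckoo-inspired moves that C-DSOS adds on top.

```python
import random

def mutualism_phase(x_i, x_j, x_best, rng):
    """One mutualism update from standard SOS: organisms i and j both
    benefit by moving toward the best organism, guided by their mutual
    vector and random benefit factors (1 or 2)."""
    mutual = [(a + b) / 2.0 for a, b in zip(x_i, x_j)]
    bf1 = rng.choice([1, 2])
    bf2 = rng.choice([1, 2])
    new_i = [a + rng.random() * (g - m * bf1)
             for a, m, g in zip(x_i, mutual, x_best)]
    new_j = [b + rng.random() * (g - m * bf2)
             for b, m, g in zip(x_j, mutual, x_best)]
    return new_i, new_j

rng = random.Random(0)  # seeded for reproducibility
xi, xj = mutualism_phase([1.0, 2.0], [3.0, 4.0], [0.0, 0.0], rng)
```

In a scheduling context each vector position would encode a task-to-VM assignment, which is why C-DSOS must discretize these continuous updates.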
|
Keywords: |
Cloud computing, Scheduling, Metaheuristic, C-DSOS, Optimisation |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
EFFICIENT DEEP LEARNING WITH COMPRESSED MUZZLE PRINTS FOR SCALABLE BUFFALO
IDENTIFICATION |
|
Author: |
FATIMAH BINTI KHALID, RANA RANJEET SINGH, NURUL AIN SHAFFERI, TIMUR R AHLAWAT, M
S SANKANUR, ANURADHA AGRAWAL, PRERNA GHORPADEG, SITI KHADIJAH ALI, NURUL AMELINA
NASHARUDDIN, MAS RINA MUSTAFFA |
|
Abstract: |
Traditional livestock identification methods, such as ear tagging and branding,
are constrained by scalability, security, and welfare issues. Biometric
identification, specifically muzzle pattern recognition, is a non-invasive and
tamperproof alternative. Deploying deep learning models for muzzle pattern
recognition is constrained by the need for substantial dataset sizes, storage
requirements and longer training time. This study introduces an approach that
integrates extreme data compression with lightweight deep learning models to
enable efficient buffalo identification. Unlike earlier studies that mainly
focused on classification accuracy, our study provides a thorough examination of
the effects of compression on model performance and storage capabilities. A
dataset of 49GB was compressed using three techniques: Principal Component
Analysis (PCA), K-Means clustering, and JPEG compression, which successfully
reduced the storage size while retaining essential features. PCA demonstrated
the most effective compression, achieving the highest compression ratio
(2585.69×) and shrinking the dataset by 99.86% to 68.6MB while sustaining high
accuracy. Three lightweight deep learning models, MobileNetV2, RegNetY_400MF,
and SqueezeNet 1.1, were trained and evaluated on both compressed and
uncompressed datasets. The results demonstrate that MobileNetV2 and
RegNetY_400MF maintained over 99% accuracy after compression, which suggests
that extreme data reduction does not substantially affect recognition
performance. This study presents one of the initial practical applications of a
compressed biometric dataset for livestock identification. The system was
deployed as a Flask-based web application, MuzzleID, enabling practical usage in
livestock management. Our findings suggest that substantially reducing dataset
size enables the widespread implementation of biometric systems in
resource-limited regions while maintaining a high standard of identification
accuracy. |
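As a quick consistency check (ours, not the authors'), the reported 99.86% reduction and the 68.6 MB compressed size agree under a decimal 1 GB = 1000 MB convention; the separately reported 2585.69x ratio is presumably computed on a different basis, such as raw pixel data.

```python
# Check that the reported figures are mutually consistent
# (assuming the decimal convention 1 GB = 1000 MB).
original_mb = 49 * 1000                        # 49 GB muzzle-print dataset
reduction = 0.9986                             # reported 99.86% size reduction
compressed_mb = original_mb * (1 - reduction)
print(round(compressed_mb, 1))                 # 68.6, the reported size
```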
|
Keywords: |
Muzzle Recognition, Biometric Identification, Data Compression, Lightweight Deep
Learning, Buffalo Identification, PCA, Mobilenetv2, Livestock Management |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
DESIGN AND DEVELOPMENT OF HYBRID ALGORITHMS FOR QUANTUM - HIGH PERFORMANCE
COMPUTING ARCHITECTURES |
|
Author: |
Mrs. B.V. PRASANNA LATHA, Dr. MUKTEVI SRIVENKATESH |
|
Abstract: |
As quantum computing steadily advances within the Noisy Intermediate-Scale
Quantum (NISQ) era, the integration of Quantum Processing Units (QPUs) with
classical High-Performance Computing (HPC) systems emerges as a compelling
strategy to address computationally intensive and complex problems across
various domains. However, the limitations of current quantum hardware, such as
small qubit counts, decoherence, and gate errors, necessitate hybrid
approaches that combine quantum and classical computational resources in an
intelligent and efficient manner. This paper presents a novel approach to the
design and development of hybrid algorithms that enable dynamic partitioning of
computational workloads across heterogeneous computing subsystems, including
Central Processing Units (CPUs), Graphics Processing Units (GPUs), Artificial
Neural Network (ANN) accelerators, and QPUs. The central goal is to harness the
complementary strengths of these architectures: CPUs for control flow and data
orchestration, GPUs for massively parallel computations, ANN accelerators for
low-latency neural processing, and QPUs for solving specific subproblems such as
combinatorial optimization, quantum simulations, and enhanced kernel
computations in machine learning. We propose a generalized framework for
dynamic task partitioning that analyzes workload characteristics, such as data
dependencies, computational complexity, and hardware affinity, to automatically
distribute tasks to the most appropriate hardware components. Furthermore, we
introduce a hybrid scheduler capable of adapting task allocations at runtime
based on system load and quantum execution latency. To validate our approach, we
present several case studies across diverse application domains: combinatorial
optimization (Max-Cut problem), quantum chemistry simulation (Variational
Quantum Eigensolver), and quantum-enhanced machine learning (Quantum Kernel
Support Vector Machines). Empirical results demonstrate that our hybrid
architecture delivers significant speedup and energy efficiency gains compared
to both conventional classical HPC pipelines and pure quantum or classical
approaches. By providing a systematic framework for hybrid algorithm design,
this work contributes toward making practical hybrid Quantum-HPC systems a
viable paradigm for solving real-world problems as quantum hardware continues to
evolve. |
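The affinity-based distribution described above can be illustrated with a toy dispatcher; the task categories and subsystem names below are ours, chosen only to mirror the pairings in the abstract, not the authors' framework.

```python
# Map each workload characteristic to the subsystem best suited to it,
# following the CPU/GPU/ANN-accelerator/QPU pairings in the abstract.
AFFINITY = {
    "control_flow": "CPU",
    "data_parallel": "GPU",
    "neural_inference": "ANN_accelerator",
    "combinatorial_optimization": "QPU",
}

def dispatch(tasks):
    """Assign each (name, characteristic) task to a subsystem,
    defaulting to the CPU when no affinity rule matches."""
    return {name: AFFINITY.get(kind, "CPU") for name, kind in tasks}

plan = dispatch([("max_cut", "combinatorial_optimization"),
                 ("orchestrate", "control_flow"),
                 ("kernel_eval", "data_parallel")])
```

A runtime scheduler like the one proposed would additionally re-evaluate this mapping against system load and quantum execution latency.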
|
Keywords: |
High Performance Computing (HPC), Quantum Approximate Optimization Algorithm
(QAOA), Quantum Machine Learning (QML), Heterogeneous Computing
Architectures (HCA), Quantum-Classical Workload Scheduling (QCWS) |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
TOWARDS SMART AGRICULTURE: AN OPTIMAL ENSEMBLE DEEP LEARNING SCHEME FOR BANANA
LEAF NUTRIENT DEFICIENCY IDENTIFICATION |
|
Author: |
REKHA.V, DR. UMA SHANKARI SRINIVASAN |
|
Abstract: |
Crop nutrition is essential to plant health during the growth phase and to
overall yield. A lack or shortage of needed nutrients is regarded as one
crucial factor that impacts the overall yield of a crop. A novel approach for
automated recognition and classification of banana leaf nutrition deficiency is
presented using an ensemble deep learning approach. Initially, the input images
are pre-processed, filtered, and augmented. They are then segmented using an
Improved TGV-FCM (Total Generalized Variation-Fuzzy C-Means) scheme. From the
segmented image, features are extracted using a Residual GCN (Graph Convolution
Network), followed by selection of suitable optimal features through the
Enriched Glow Swarm Optimization (EGSO) approach. A ranking-based ensemble
MobileNet v2 and LeNet classifier model is used to diagnose nutrition
deficiency in banana leaves. The proposed model is validated and compared with
traditional models in terms of various metrics to demonstrate its efficacy. |
|
Keywords: |
Banana crop, Nutrition Deficiency Detection, Deep Learning, Residual Graph
Convolution Network, Enriched Glow Swarm Optimization, Ranking based Ensemble
MobileNet v2 and LeNet classifier |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
ORGANIZATIONAL AND ANALYTICAL SUPPORT FOR CONDUCTING EXTERNAL AUDITS OF DATA
PROTECTION SYSTEMS IN INSTITUTIONS |
|
Author: |
MYKOLA MASESOV, OLEG DIEGTIAR, DMYTRO MINOCHKIN, ANDRII KHAPSALIS, DMYTRO
NOVYTSKYI, SERGII KRAMARENKO |
|
Abstract: |
The main goal of information security is to protect information and related
infrastructure from damage that may arise as a result of loss, alteration, or
unauthorized distortion of information, whether due to criminal attempts or
accidental actions. In such cases, the goal is to minimize the damage that may
be caused by such circumstances. From time to time, it is also advisable to
conduct an independent assessment of the information security in place at the
organization. Such audits assess the actual level of protection and the
response to constantly changing risks. An audit is, in essence, an indispensable
source of further recommendations for the design and modernization of protection
systems to ensure compliance with standards. This article outlines the main
stages of the methodology for developing an analysis of the effectiveness of
information security measures in organizations, although some of the proposed
indicators require serious refinement, which opens up new opportunities for
further scientific research. |
|
Keywords: |
Independent Information Security Audit, Information Security Auditor,
Information Security Assessment, Cybersecurity Assessment |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
DEVELOPMENT OF INTELLIGENT ADVISORY SYSTEM WITH COGNITIVE TECHNOLOGY |
|
Author: |
PHANINTORN SUAPRAE, PRACHYANUN NILSOOK, PANITA WANNAPIROON, VITSANU
NITTAYATHAMMAKUL |
|
Abstract: |
Existing research on student retention mainly focuses on risk prediction, with
few studies implementing advisory processes that translate predictions into
timely, personalized interventions. This study develops and evaluates the
intelligent advisory system with cognitive technology (referred to as the IAS-CT
system) to improve student retention in higher education. The persisting gap in
the literature is that most retention studies stop at risk prediction and rarely
operationalize a closed-loop advisory workflow that converts predictions into
timely, personalized interventions. Using de-identified institutional records of
2,973 undergraduates from academic years 2019–2022 with 25 academic and
socio-demographic features, we trained and compared Decision Trees, Logistic
Regression, Random Forest, K-Nearest Neighbors, and Naive Bayes. Preprocessing
comprised imputation, normalization, and categorical encoding/selection;
evaluation used a stratified split and standard metrics (accuracy, precision,
recall, and F1) with confusion matrices. Correlation analysis indicated that GPA
(r = 0.55), absenteeism (r = 0.48), father’s income (r = 0.45), year of study (r
= 0.38), and field of study (r = 0.20) were the most associated factors with
retention. Decision Trees achieved the best predictive performance (accuracy =
98.90%), exceeding Logistic Regression (97.40%), Random Forest (86.10%),
K-Nearest Neighbors (85.90%), and Naive Bayes (85.80%). The selected model was
integrated into an advisory architecture that issues early-warning alerts,
generates personalized study recommendations, and supports advisor–student
communication. An expert panel rated the system’s suitability at an overall high
level. Consequently, the system operationalizes prediction into intervention,
providing actionable retention support with practical implications for data
governance and institutional scaling. |
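The standard metrics reported above (accuracy, precision, recall, F1) follow directly from a binary confusion matrix; the sketch below is generic, not the authors' code, and the counts are illustrative.

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from binary confusion counts
    (e.g., retained vs. at-risk students)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics(tp=90, fp=10, fn=10, tn=90)
```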
|
Keywords: |
Cognitive Computing, Educational Data Mining, LINE Official Account, Machine
Learning |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
REAL-TIME ADAPTIVE TRAFFIC MANAGEMENT USING MACHINE LEARNING AND INTERNET OF
THINGS (IOT) |
|
Author: |
B SARITHA, Dr. KALYANAPU SRINIVAS, PAVAN KUMAR KOLLURU, SIVA KUMAR NALLA, Dr.
DIVVELA SRINIVASA RAO, GARIGIPATI RAMA KRISHNA, Dr. SUVETHA POYYAMANI
SUNDDARARAJ |
|
Abstract: |
This work discusses an adaptive traffic management approach using IoT sensors
and ML to handle urban traffic efficiently. To prevent congestion, improve
travel times, and boost safety, the system adjusts traffic lights every
few seconds using data from cameras, LiDAR and radar sensors. A combination of
supervised and reinforcement learning is used to forecast traffic and manage
traffic signals. Simulations found that travel time is down by 29.9%, traffic
flow efficiency goes up by 19.8%, and collisions are reduced by 78.4% with an
adaptive signal system. This system is designed to adapt well to changes in
traffic and weather, providing noticeable enhancements for traffic handling. As
a result of this research, traffic systems in smart cities should become more
efficient, expandable and adaptable, directly reducing city congestion and
improving how safely people travel. The primary contribution of this research
lies in the integration of heterogeneous IoT sensor data with machine learning
models to enable real-time, adaptive traffic management. This contribution
advances current knowledge by demonstrating a scalable framework that
significantly reduces travel time, enhances traffic flow efficiency, and
improves road safety in dynamic urban environments. |
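The reinforcement-learning side of such a signal controller can be illustrated with a single tabular Q-learning update for phase selection; the states, actions, and reward below are toy values of ours, not the paper's encoding.

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: nudge Q(s, a) toward the reward
    plus the discounted best value of the next state."""
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
    return q[state][action]

# Two traffic states (congested / free) and two actions per state
# (extend the current green phase, or switch phases).
q = {"congested": {"extend": 0.0, "switch": 0.0},
     "free": {"extend": 0.0, "switch": 0.0}}
v = q_update(q, "congested", "switch", reward=1.0, next_state="free")
```

In a deployed system the state would be built from the fused camera, LiDAR, and radar observations rather than a coarse label.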
|
Keywords: |
Adaptive Traffic Management, Internet of Things (IoT), Machine Learning, Traffic
Signal Control, Real-Time Optimization, Smart Cities |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
HYBRID VISION TRANSFORMER-CNN FRAMEWORK FOR AUTOMATED RETINAL DISEASE DETECTION
FROM FUNDUS IMAGES |
|
Author: |
CHANDRA MOULI DARAPANENI, BELLAM SURENDRA BABU, S.V. SATYANARAYANA, L.N.K. SAI
MADUPU, V. SUJAY, NAULEGARI JANARDHAN, G D K KISHORE, KIRAN KUMAR KAVETI,
RALLABANDI CH.S.N.P. SAIRAM |
|
Abstract: |
Pathologies impacting the retina, such as diabetic retinopathy (DR), glaucoma,
and age-related macular degeneration (AMD), contribute significantly to global
visual impairment and blindness. The prompt and precise identification of these
afflictions through fundus imaging is imperative for facilitating effective
interventions and enhancing patient prognoses. This study presents a
hybrid deep learning framework that integrates Vision Transformers
(ViTs) with Convolutional Neural Networks (CNNs) to enable multi-class
classification of retinal disorders retrieved from fundus imaging. The proposed
system leverages the localized feature extraction proficiencies of CNNs in
conjunction with the global contextual insights provided by transformer-based
attention mechanisms. Publicly accessible datasets, namely APTOS, Messidor, and
EyePACS, constitute the foundation for both training and evaluation processes.
The methodology encompasses sophisticated preprocessing techniques, including
contrast enhancement, optic disc segmentation, and extraction of vascular
features, aimed at augmenting diagnostic accuracy. This hybrid technique attains
a classification accuracy level of 96.3%, in conjunction with a precision
statistic of 95.1%, a recall statistic of 96.7%, and an AUC score of 0.982.
Statistical validation through ANOVA corroborates the model’s outstanding
performance (p < 0.01) when juxtaposed with standalone CNN and ViT
architectures. These outcomes underscore the efficacy of intelligent
vision-based systems in ophthalmic diagnostics and accentuate their significance
within the realms of soft computing, AI-enhanced healthcare, and automated
disease screening technologies. |
|
Keywords: |
Retinal Disease Detection, Fundus Imaging, Deep Learning, Convolutional Neural
Networks (CNN), Medical Image Analysis, Diabetic Retinopathy, Age-related
Macular Degeneration. |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
AN OPTIMIZED DEEP LEARNING FRAMEWORK FOR ACCURATE AND EFFICIENT FACIAL EMOTION
RECOGNITION |
|
Author: |
MUSHIKA SHYLAJA, PROF. M SHESHIKALA |
|
Abstract: |
This paper presents an end-to-end framework for facial emotion recognition (FER)
that is expressly designed for edge deployment where compute, memory, and power
are limited. Starting from a MobileNetV2 backbone, we integrate hybrid attention
to emphasize semantically meaningful regions (eyes, brows, mouth) while
suppressing background noise, yielding features that are both compact and
interpretable. To address domain shift across datasets and capture conditions,
we adopt a dual-attention domain-adaptation stage that stabilizes
representations without expensive target-label supervision. A landmark-guided
pruning step then removes redundant filters tied to low-saliency areas,
preserving expression-relevant structure while reducing model size. The
resulting pipeline balances four goals often treated in isolation: accuracy,
efficiency, robustness, and accountability. Qualitative visualizations show that
attention concentrates on discriminative facial musculature, while stressor
analyses (occlusion, illumination, pose/blur) indicate resilient behaviour. The
operational profile demonstrates real-time throughput with lower latency and
energy per inference, enabling practical deployment on low-power platforms
without retraining. Beyond performance, we discuss fairness and privacy
considerations and outline auditing hooks that make model decisions easier to
inspect. Overall, the framework advances FER from accuracy-only optimization to
a deployable, trustworthy, and resource-efficient solution. |
|
Keywords: |
Facial Emotion Recognition, Attention Mechanisms, Domain Adaptation, Model
Quantization, Edge Deployment |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
MULTIMODAL AI-DRIVEN EARLY DETECTION OF PARKINSON’S DISEASE USING NEURO-MOTOR
BIOMARKERS FROM KEYSTROKE DYNAMICS AND HANDWRITING ANALYSIS |
|
Author: |
MRS. G. MANI, DR. S.V.G. REDDY |
|
Abstract: |
Parkinson's Disease (PD) is a progressive neurodegenerative condition that
affects fine motor control, so early diagnosis is important to ensure timely
intervention. This research paper introduces a multimodal artificial
intelligence (AI) model that combines keystroke dynamics and handwriting
analysis to detect early neuro-motor abnormalities linked to PD. The
supervised machine learning models Support
abnormalities linked to PD. The supervised machine learning models of Support
Vector Machine (SVM), Random Forest (RF), and Gradient Boosting (GB) were used
to analyze the temporal keystroke dataset (Tappy Keystroke Data) with a maximum
classification accuracy of 96.12%. At the same time, spatial handwriting
samples (spiral and wave drawings) were processed by Convolutional Neural
Networks (CNNs), yielding accuracies of 86.7% and 83.3%, respectively. The
combination of temporal and spatial biomarkers shows that multimodal AI
analysis may be more effective than single-modality systems, providing an
inexpensive, non-invasive, and scalable diagnostic tool to support the
detection of PD in its early stages. The study lays a foundation for multimodal
detection systems that incorporate additional biomarkers, including voice and
ocular movement, to increase predictive accuracy. |
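Temporal keystroke features of the kind extracted from press/release timestamps, hold time and flight time, can be sketched as follows; this is an illustrative example with names of ours, not a reproduction of the exact feature set used with the Tappy data.

```python
def keystroke_features(events):
    """Compute hold times (release - press per key) and flight times
    (next press - previous release) from (press, release) timestamp pairs."""
    holds = [release - press for press, release in events]
    flights = [events[i + 1][0] - events[i][1] for i in range(len(events) - 1)]
    return holds, flights

# Timestamps in milliseconds for three consecutive keystrokes.
holds, flights = keystroke_features([(0, 80), (150, 240), (300, 370)])
```

Slowed or irregular hold and flight times are the kind of fine-motor signal such classifiers learn to associate with PD.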
|
Keywords: |
Parkinson's Disease, Keystroke Dynamics, Handwriting Analysis, Artificial
Intelligence, Neuro-Motor Biomarkers, Machine Learning, Convolutional Neural
Networks, Early Detection. |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
ADAPTIVE SLANG AND LANGUAGE MODELING FOR MINING INFORMAL WEB CONTENT |
|
Author: |
AMBEDKAR KANAPALA, M SANDEEP, JAKKA TEJA, ASHWINI MANIKONDA, BHASKAR MEKALA,
S MAHIPAL |
|
Abstract: |
This work compares the accuracy of three models, BERT, GPT, and the proposed
model, across three datasets: Twitter, Reddit, and Mixed-Language. The
highest accuracy across all datasets due to its adaptive slang and language
modeling framework. On the Twitter dataset, which contains heavy slang usage,
the proposed model achieves an accuracy of 86.1%, significantly higher than BERT
(78.4%) and GPT (80.1%). This result demonstrates the model’s superior ability
to handle informal and slang-heavy language. On the Reddit dataset, which
includes a mix of formal and informal content, the proposed model achieves 84.5%
accuracy, showcasing its adaptability to diverse communication styles.
Similarly, in the Mixed-Language dataset, the proposed model achieves 81.2%
accuracy, highlighting its effectiveness in processing code-switching and
multilingual content. Overall, the results indicate that the proposed framework
excels in handling informal, noisy, and multilingual data, outperforming
traditional models across all tested scenarios. |
|
Keywords: |
Adaptive Language Modeling, Slang Detection, Informal Web Content, Noisy Data,
Continual Learning, Self-Supervised Learning, Sentiment Analysis |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
USING ARTIFICIAL INTELLIGENCE FOR DIGITAL FINANCIAL TRANSACTION MONITORING |
|
Author: |
HASSAN ALI AL-ABABNEH, OLEH ONYSHKO, ANNA PANCHENKO, MARYNA SHASHYNA,
OLEKSANDR BARINOV |
|
Abstract: |
Although numerous studies have addressed fraud detection in financial systems,
there remains a significant gap in the literature regarding validated, scalable
AI-based models and hybrid automation schemes for financial transaction
monitoring. Existing research lacks comprehensive simulation-based evaluations
of transition strategies from human-centric oversight to AI-enhanced processes.
This study addresses this gap by identifying optimal AI models and simulating
hybrid deployment schemes, thereby generating new knowledge for automated
fintech oversight. The following methods were used: focus on the subject of the
study; normalization of an unbalanced empirical dataset using Z-score;
preparation of a training and validation dataset; comparative analysis of models
for detecting anomalies in financial transactions; modelling the risk of
committing fraud during the monitoring of financial transactions. It was found
that AI models, in particular Random Forest (80% Recall, 99.98% Specificity,
99.95% Accuracy, 0.05% Error, κ = 0.842, Time Cost = 26 s), have significant
potential in fintech for detecting fraudulent transactions, and the most
effective approach is a hybrid interaction of AI and specialists. The academic
novelty of the study is the modelled transition from human monitoring of
financial transactions to automation through hybrid schemes, which allows AI to
integrate the best practices of specialists. Further research should focus on
expanding the range of AI models, using raw datasets without PCA, and improving
hybrid interaction schemes in fintech. |
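The Z-score normalization step in the methodology can be sketched generically as follows (a standard per-feature transform, not the authors' code):

```python
import statistics

def z_score(values):
    """Normalize a feature column to zero mean and unit variance
    using population statistics."""
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)
    return [(v - mean) / std for v in values]

# Example: a toy transaction-amount column.
scaled = z_score([10.0, 20.0, 30.0, 40.0])
```

For unbalanced fraud data, scaling like this keeps large-magnitude features (e.g., amounts) from dominating distance- or tree-split-based anomaly scores.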
|
Keywords: |
Fintech, Fraudulent Transactions, Fraud Detection, Machine Learning,
Classification, Empirical Dataset, AI Models. |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
LINKED MOTION ESTIMATION METHOD BASED FEATURE VECTOR WITH PSO INTEGRATED MEAN
SHIFT TECHNIQUE FOR VIDEO STEGANOGRAPHY |
|
Author: |
SAMEERUNNISA SHAIK, JABEZ J |
|
Abstract: |
Video steganography is the practice of concealing a message in a video file in
order to transmit it invisibly. For secure online and all communication, the
ability to use digital steganography to hide communications is essential.
Security applications rely on steganography, which is embedded in video frames
and films, to protect sensitive information. In order to reduce data transit and
storage requirements, most social media networks use lossy video transcoding.
Most video steganography systems cannot be used for secure social media-based
communication due to the process of video transcoding. In order to establish
dependable secret communication on social media platforms, a robust video
steganography model is proposed using features extracted from the video to
counteract the effects of video transcoding. This research presents a novel
method for object-based video steganography, which involves encoding secret data
into the motion vectors of objects in motion. Consequently, the objects present
in each frame are identified using the mean shift technique. The Motion Vector
(MV) of every item is restored to within a quarter of a pixel using a method
that estimates motion from B and P frames. Determining the ideal
storage-to-video-quality ratio for the chosen motion vectors and associating
them with the object requires defining a threshold value. For this reason, we
only take into account the motion vectors whose magnitudes exceed the
threshold, so that for any moving object the correct vectors are chosen and
small motion vectors that do not come from a spatially significant area are
removed. The hidden
information is encoded in a quarter of the horizontal and vertical components of
the motion vectors that are being tracked. The proposed model makes use of
Particle Swarm Optimization (PSO) for updating the feature set for accurate
motion detection. In this research a Linked Motion Estimate Method with PSO
Integrated Mean Shift Technique using the Weighted Feature Vector
(LMEM-PSO-MST-WFV) is proposed for video steganography for secure transmission.
The statistics useful for steganography may be affected by changes made to the
motion vector during the information embedding process. The performance of the
proposed model on motion-region-based feature vectors for video steganography
is compared with that of the traditional model, and the results show that the
proposed model performs substantially better. |
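The threshold-based motion-vector selection described above can be sketched as follows, assuming (as the abstract suggests) that the selection criterion is the vector magnitude; the threshold value is illustrative.

```python
import math

def select_vectors(motion_vectors, threshold):
    """Keep only motion vectors whose magnitude exceeds the threshold,
    discarding small vectors from spatially insignificant regions."""
    return [mv for mv in motion_vectors if math.hypot(*mv) > threshold]

# (dx, dy) motion vectors in quarter-pixel units.
chosen = select_vectors([(0.5, 0.5), (3.0, 4.0), (-6.0, 8.0)], threshold=2.0)
```

Only the surviving vectors would then carry payload bits in their horizontal and vertical components.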
|
Keywords: |
Video Steganography, Lossy Video Transcoding, Motion Estimate Method, Mean Shift
Technique, Feature Vector, Secure Transmission. |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
5G AND BEYOND: ADVANCING ULTRA-LOW LATENCY COMMUNICATION FOR THE NEXT
GENERATION OF AUTONOMOUS SYSTEMS |
|
Author: |
DR. L.K. SURESH KUMAR, DR. A. PRASHANTHI, S. SURYANARAYANA, DR. ROSHINI L,
JOHN T MESIA DHAS, DR. Y. MADHAVILATHA, P. MUTHUKUMAR |
|
Abstract: |
The arrival of 5G has been a radical shift in the telecom industry, offering
lightning-fast ultra-low latency capabilities crucial for the up-and-coming
waves of autonomous technologies such as self-driving cars, drones, and even
other industrial robots. These systems depend primarily on real-time
communication and decision-making, enabling autonomous and safe operation.
Existing wireless networks such as 4G, and even baseline 5G, have many
limitations in addressing the ultra-low latency needs of dynamic autonomous
systems; this paper discusses how edge computing, network slicing, AI-driven
optimization, and related techniques can help 5G further reduce latency and
increase reliability. Specifically, our experiments show that for our hybrid
communication framework comprising 5G, edge computing, and dynamic network
slicing, we observe a latency of 2.3ms which is notably lower than that of a
basic 4G system (50.1ms) and a basic 5G system (20.5ms). The framework also
obtained 95.2 Mbps throughput, 99.8% reliability, and shortened time to complete
all tasks to 10.2 seconds. The hybrid framework proposed in this article not
only outperforms on several vital metrics but also provides a durable approach
to satisfy the communication demands of autonomous systems, one that can be
further enhanced by optimization of the AI algorithms to determine dynamic
resource allocation, as well as the exploitation of 6G to provide reduced
latency and intelligent communication capabilities. |
|
Keywords: |
5G Technology, Autonomous Systems, Ultra-Low Latency Communication, Edge
Computing, Network Slicing |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
DEVELOPMENT OF A RULE-BASED ENERGY MANAGEMENT STRATEGY FOR HYBRID ELECTRIC
VEHICLES WITH INTEGRATED HYBRID ENERGY STORAGE SYSTEM |
|
Author: |
SUNKA DIVYA, Dr. D. KIRAN KUMAR |
|
Abstract: |
Electric vehicles present a compelling alternative to conventional
gasoline-powered vehicles, leveraging efficient energy management techniques and
showcasing promising prospects for the continued expansion of Hybrid Electric
Vehicles (HEVs). The energy management system (EMS) holds a crucial role in
HEVs, serving to extend the vehicle's driving range while concurrently lowering
costs. This paper introduces an energy management strategy and optimization
approach for HEVs equipped with a hybrid energy storage system comprising a
Li-ion battery pack and an ultracapacitor (UC) pack, implemented through a Fuzzy
Logic Controller (FLC). The primary objectives include enhancing battery
performance metrics such as state of charge, driving range, and battery
lifespan. A parametric comparison with a PID controller is provided.
Battery-powered electric vehicles (EVs) offer high energy density, low
environmental impact, and durable performance. The integration of batteries
and ultracapacitors is employed to enhance power characteristics: batteries
serve as the primary energy source because of their high energy density, while
ultracapacitors supply high power density. Simulink serves as a tool for
simulating the proposed system's performance, aiding in the determination of the
most suitable intelligent system for achieving power efficient operation of the
HEV. A comparative analysis of the results from both control techniques is
conducted, ultimately recommending the Fuzzy controller for effective energy
management in electric vehicles. |
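The battery/ultracapacitor split described above can be illustrated with a minimal rule-based sketch (not the paper's FLC or PID controller; the thresholds, the 30 kW rating, and the function name are illustrative assumptions):

```python
def split_power(demand_kw, soc_uc, batt_rated_kw=30.0):
    """Rule-based split of a power demand between battery and ultracapacitor (UC).

    The battery (high energy density) serves the steady base load; the UC
    (high power density) absorbs peaks and regenerative braking. All
    thresholds and the 30 kW battery rating are illustrative.
    """
    if demand_kw < 0:                             # regenerative braking
        uc = demand_kw if soc_uc < 0.95 else 0.0  # recover into UC unless full
        return demand_kw - uc, uc                 # (battery_kw, uc_kw)
    base = min(demand_kw, batt_rated_kw)          # battery covers up to its rating
    peak = demand_kw - base                       # excess is a power peak for the UC
    if soc_uc < 0.2:                              # UC depleted: battery takes it all
        base, peak = demand_kw, 0.0
    return base, peak

print(split_power(20.0, soc_uc=0.6))   # steady cruise -> (20.0, 0.0)
print(split_power(45.0, soc_uc=0.6))   # hard acceleration -> (30.0, 15.0)
```

A fuzzy controller would replace the hard thresholds with overlapping membership functions, but the input/output structure is the same.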
|
Keywords: |
Energy storage systems (ESS), Fuzzy logic controller (FLC), Hybrid Electric
Vehicle (HEV), Integrated system, Ultracapacitor (UC). |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
ANALYSING HUMOUR AND SENTIMENT IN FAMILY GUY DIALOGUE USING NLP AND MACHINE
LEARNING |
|
Author: |
PRATHIMA TIRUMALAREDDY , ESWAR REDDY KETHIREDDY , SIRISHA ALAMANDA , LAKSHMI
SREENIVASA REDDY DIRISINAPU |
|
Abstract: |
Trends and sentiments in the dialogue of Family Guy, a famous animated
television show, were analyzed. Using the Linear Discriminant Analysis (LDA)
machine learning algorithm, prevalent topics and sentiments in the dialogue are
identified. The relationship between humour in the dialogue and season-wise
ratings is examined to provide insight into what makes Family Guy appealing to
its audience. Results suggest that the show's humour, use of pop culture
references, and social commentary contribute to its success. Additionally,
sentiment analysis reveals that the show's humour is often characterized by
sarcasm and irony. Four lexicons are used to identify positive and negative
sentiment in the Family Guy dialogue. Three of the lexicons (AFINN, Bing, and
NRC) are existing resources, while the fourth is curated to analyse humour. The
sentiment of the dialogue in each dataset is analysed, and results show that the
sentiment varies depending on the characters and situations involved. The show
often uses humour to address serious social issues. A comprehensive analysis of
the dialogue in Family Guy and its implications for media and society is
provided. |
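Lexicon-based scoring of the kind the abstract describes can be sketched as follows (the toy word list and weights are illustrative stand-ins, not the AFINN/Bing/NRC lexicons or the authors' curated humour lexicon):

```python
# Toy lexicon in the spirit of AFINN: word -> signed sentiment strength.
LEXICON = {"love": 3, "great": 3, "funny": 2, "hate": -3, "stupid": -2, "freakin": -1}

def score_line(line, lexicon=LEXICON):
    """Sum the signed lexicon weights over the tokens of one dialogue line;
    positive totals read as positive sentiment, negative as negative."""
    tokens = [w.strip(".,!?\"'").lower() for w in line.split()]
    return sum(lexicon.get(t, 0) for t in tokens)

print(score_line("I love this freakin show!"))  # 3 + (-1) = 2
print(score_line("I hate it"))                  # -3
```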
|
Keywords: |
Sentiment Analysis, Dialogue, Lexicon, Humour, Linear Discriminant Analysis-LDA,
Family Guy Dataset |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
ADAPTIVE HYBRID VISION TRANSFORMER FOR PRECISE SEGMENTATION OF MANGO LEAF
DISEASES |
|
Author: |
NURUL AKMAR AZMAN, FATIMAH KHALID, PUTERI SUHAIZA SULAIMAN, ZAINAL ABDUL KAHAR |
|
Abstract: |
Early and accurate detection of leaf diseases is critical for sustaining crop
health and preventing yield loss in modern agriculture. However, existing deep
learning segmentation models often struggle to balance local feature extraction
with global contextual understanding. This study introduces an adaptive hybrid
architecture that integrates U-Net’s precise localization with a Vision
Transformer (ViT) encoder enhanced by adaptive positional embeddings, which
dynamically preserve spatial relationships and adapt to varying leaf
morphologies. A tailored preprocessing pipeline combining context-aware
brightness normalization and HSV-based background removal further improves
symptom visibility under diverse lighting conditions. Using the MangoLeafBD
dataset (2,500 annotated images across five classes) to validate the framework,
the proposed model achieves 99.71% accuracy and a mean IoU of 0.973, surpassing
existing methods by 0.78–8.13% while maintaining efficiency on mid-range GPUs.
This study establishes a scalable benchmark for agricultural image segmentation,
advancing the architecture of hybrid transformer with CNN integration for
field-level disease detection. |
|
Keywords: |
Vision Transformers, U-Net, Semantic Segmentation, Adaptive Positional
Embeddings |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
MULTI-HOP COMMUNICATION IN WIRELESS SENSOR NETWORK |
|
Author: |
RAKESH RANJAN, VAISHALI SINGH, HITENDRA SINGH |
|
Abstract: |
In this research, the authors propose a novel multi-hop communication framework
for Wireless Sensor Networks (WSNs) aimed at enhancing clustering efficiency and
securing data transmission against intruder-based attacks. The core contribution
lies in an optimized Cluster Head (CH) selection algorithm combined with a
lightweight Intrusion Detection Mechanism (IDM), ensuring both energy efficiency
and secure communication. Simulations were conducted using NS-3 (Network
Simulator-3), where 150 sensor nodes were deployed over a 100×100 m² area.
Performance was evaluated over 500 rounds using multiple performance metrics.
The proposed Secure Energy-aware Multi-hop Adaptive Clustering (SEMAC) model
demonstrated a 28.6% improvement in clustering efficiency compared to the
traditional LEACH protocol, along with a 17.4% reduction in energy consumption
per node per round. In terms of security, SEMAC achieved a 96.2% intrusion
detection rate, a false positive rate of only 3.5%, and an attack detection
accuracy of 95.8%, highlighting its robustness against common WSN attacks. The
model converged efficiently within 80 training epochs, reducing the overhead of
repeated clustering. The prediction module for node behaviour and trust scores
exhibited low error margins, with a Mean Absolute Error (MAE) of 0.046 and a
Mean Squared Error (MSE) of 0.0089. Additionally, the system maintained a high
Packet Delivery Ratio (PDR) of 93.7%, ensuring reliable data transmission even
under security threats and energy constraints. These results validate the
effectiveness of the SEMAC model in enhancing the operational lifespan,
reliability, and resilience of WSNs, making it suitable for deployment in
mission-critical IoT and environmental monitoring applications. |
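For context, the LEACH baseline that SEMAC is compared against elects cluster heads with a probabilistic round-robin threshold; a minimal sketch of that election (the classic LEACH rule, not the paper's optimized CH algorithm; the seed and node count are illustrative):

```python
import random

def leach_threshold(p, r):
    """Classic LEACH cluster-head election threshold T(n) for round r, with
    desired CH fraction p; nodes that served as CH in the last 1/p rounds
    are excluded (threshold 0) by the caller."""
    return p / (1 - p * (r % int(round(1 / p))))

def elect_cluster_heads(eligible, p, r, rng=None):
    """Each eligible node independently becomes CH with probability T(n)."""
    rng = rng or random.Random(42)
    t = leach_threshold(p, r)
    return [n for n in eligible if rng.random() < t]

# 150 nodes with a 10% CH target: round 0 uses threshold p; by the last
# round of the epoch (r = 9) every remaining eligible node is elected.
print(round(leach_threshold(0.1, 0), 3))   # 0.1
print(round(leach_threshold(0.1, 9), 3))   # 1.0
heads = elect_cluster_heads(range(150), p=0.1, r=0)
```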
|
Keywords: |
Wireless Sensor Networks, Clustering Efficiency, Intrusion Detection Mechanism,
SEMAC, MAE, MSE, NS-3. |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
APPLICATION OF THE DEMPSTER SHAFER METHOD IN AN EXPERT SYSTEM TO DETECT ONLINE
GAME ADDICTION |
|
Author: |
MARYANA, NURDIN, TAUFIQ, NURUL WILDA, MUHAMMAD FIKRY, ARNAWAN HASIBUAN |
|
Abstract: |
The increasing popularity of online games among college students has given rise
to various problems, one of which is online game addiction. The impacts of
online game addiction span academic, social, and psychological aspects of
students' lives. The purpose of this study is to build an expert system to
diagnose online game addiction using the Dempster-Shafer method, with
questionnaire data from 264 students as the dataset and 211 students'
responses as training data. The
Dempster-Shafer method uses expert confidence values to determine how much a
symptom can influence the likelihood of a case occurring. This expert system
uses 16 symptoms and 3 levels of addiction: mild, moderate, and severe.
Evaluation on 53 test records showed that the Dempster-Shafer method achieves
an accuracy of 73%, with 39 cases classified as moderate addiction. This
system model is expected to provide information and solutions for people
experiencing addiction, so that prevention and recovery can be carried out
before the consequences become severe. |
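The core of the method is Dempster's rule for combining evidence from individual symptoms; a self-contained sketch (the symptom masses below are illustrative, not the expert confidence values from the study):

```python
def combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments.
    Keys are frozensets of hypotheses; each assignment's masses sum to 1.
    Conflicting mass (empty intersections) is discarded and renormalized."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypotheses: mild (M), moderate (D), severe (S); masses are illustrative.
theta = frozenset({"M", "D", "S"})
m_sym1 = {frozenset({"D"}): 0.6, theta: 0.4}        # symptom 1: "moderate"
m_sym2 = {frozenset({"D", "S"}): 0.7, theta: 0.3}   # symptom 2: moderate or severe
m = combine(m_sym1, m_sym2)
print(round(m[frozenset({"D"})], 3))   # 0.42 + 0.18 = 0.6
```

Further symptoms are folded in by repeated calls to `combine`, and the final diagnosis is the addiction level with the highest combined belief.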
|
Keywords: |
Online Games, Expert System, Dempster Shafer Method, Game Addiction Symptoms. |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
AUDIO-VISUAL DEEPFAKE DETECTION WITH CROSS MANIPULATION EVALUATION ON
FAKEAVCELEB |
|
Author: |
REHAM MOHAMED ABDULHAMIED, MONA M. NASR, FARID ALI MOUSA, SARAH NAIEM |
|
Abstract: |
Deepfake technologies have advanced rapidly in recent years, enabling highly
realistic manipulations of both audio and video. While such technologies offer
creative potential, they also present major risks in misinformation, fraud, and
privacy violations. This paper explores the problem of detecting deepfake
content using the FakeAVCeleb dataset, which provides both authentic and
manipulated audio-video samples. We present experiments using state-of-the-art
audio models (AASIST, RawNet2, ECAPA-TDNN), video models (Vision Transformers
and SyncNet/Wav2Lip for lip-sync consistency), and multimodal fusion approaches.
In particular, we evaluate robust detection under cross-manipulation scenarios,
where models are tested on manipulation types unseen during training. Our
results highlight the performance drop in cross-manipulation settings,
emphasizing the importance of robust multimodal fusion. Fusion methods achieved
improved generalization, indicating that combining complementary cues across
modalities is key to resilient deepfake detection. |
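The multimodal fusion step can be illustrated with a weighted late-fusion sketch (the modality names, scores, and weights are hypothetical; in the paper's setting the weights would be tuned on validation data rather than fixed by hand):

```python
def fuse_scores(modality_scores, weights):
    """Weighted late fusion of per-modality 'fake' probabilities.
    modality_scores and weights are dicts keyed by modality name;
    the result is the normalized weighted average."""
    total = sum(weights.values())
    return sum(weights[m] * modality_scores[m] for m in modality_scores) / total

scores = {"audio": 0.9, "video": 0.4, "lipsync": 0.7}   # hypothetical model outputs
weights = {"audio": 1.0, "video": 1.0, "lipsync": 2.0}  # illustrative weights
p_fake = fuse_scores(scores, weights)
print(round(p_fake, 3))   # (0.9 + 0.4 + 2*0.7) / 4 = 0.675
```

Because each modality degrades differently under unseen manipulations, the fused score is more stable than any single model's output, which is the generalization effect the abstract reports.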
|
Keywords: |
Deepfake Detection, FakeAVCeleb Dataset, Audio-Visual Forensics,
Cross-Manipulation Evaluation, Video Manipulation, Audio Deepfake, Visual
Deepfake |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
MRI BASED ALZHEIMER’S DISEASE DETECTION AND STAGING USING ATTENTION-GUIDED
HYBRID ENSEMBLE LEARNING |
|
Author: |
VINUKONDA EMANUEL RAJU , B.N. JAGADESH |
|
Abstract: |
The prompt and accurate identification of Alzheimer's Disease (AD) is essential
for guiding effective treatment strategies. However, this task is challenging
due to the similar structural brain features present across different stages of
the disease and the uneven distribution of classes in medical imaging datasets.
This study introduces a hybrid deep learning framework aimed at improving the
multi-class classification of Alzheimer's disease based on MRI scans. The
technique begins with image preparation utilizing CLAHE and elastic deformation
to enhance contrast and variability. Brain structures are then delineated using
an Attention U-Net, enabling the model to emphasize anatomically significant
regions. Features are then extracted with the EfficientNetV2B0 architecture
and reduced in dimensionality through CatBoost-based feature selection, which
retains the most significant attributes. A Deep Neural
Network (DNN) utilizing restructured attention is trained on a balanced dataset,
augmented through SMOTE to rectify class imbalance. The final prediction is
generated by integrating outputs from CatBoost, XGBoost, and the DNN through an
ensemble meta-learner utilizing attention-based integration. Experimental
validation on the OASIS dataset achieved an accuracy of 95.47%, exceeding
current benchmark models. The suggested method exhibits significant
generalization and sensitivity to minority classes, underscoring its potential
applicability in clinical diagnostic procedures. |
|
Keywords: |
Alzheimer's Disease Classification, Attention U-Net, EfficientNetV2B0, Ensemble
Learning, MRI Brain Imaging. |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
A DEEP TREE LIQUID STATE AND GREEDY STACKED AUTOENCODER FRAMEWORK FOR THYROID
DISEASE PREDICTION |
|
Author: |
Mrs. R. VANITHA , Dr. K. PERUMAL |
|
Abstract: |
Thyroid disease is a medical condition in which the functioning of the thyroid
gland is impaired. The thyroid is an endocrine gland positioned at the front of
the neck that produces thyroid hormones, which travel through the bloodstream
to help regulate a variety of other organs. When the body produces too much
thyroid hormone, hyperthyroidism develops, whereas insufficient hormone
production results in hypothyroidism. Deep Learning (DL) algorithms, which
analyze data automatically and construct analytical models, can also be applied
to thyroid disease diagnosis. Feature extraction is essential in DL to improve
efficiency: features are extracted based on their significance, so that the
elements most important to prediction are retained while irrelevant features
unrelated to the disease are discarded. In this work, a method called Deep Tree
Liquid State and Greedy Stacked Autoencoders (DTLS-GSA) based feature
extraction is proposed for thyroid disease prediction. It is split into two
stages: efficient data handling and feature extraction. First, raw thyroid
datasets are handled using a Deep Tree Liquid State Machine architecture,
chosen for its ability to handle highly skewed data without losing the original
information. Second, after the data are handled efficiently, Greedy
Layerwise-Stacked Denoising Sparse Autoencoders are applied to extract the
features required for thyroid disease detection. Employing both forward
propagation (inputting sample instances) and backward propagation (fine-tuning
weights) drives the loss function to converge at an early stage, improving the
overall precision and recall of the extracted features. Experimental evaluation
is carried out on the Thyroid Disease dataset using factors such as precision,
recall, accuracy, and training time. |
|
Keywords: |
Thyroid Disease Prediction, Deep Learning, Deep Tree Liquid State Machine,
Greedy Layerwise, Stacked Denoising, Sparse Autoencoders |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
AN HHO OPTIMIZED CNN VIT FRAMEWORK FOR CERVICAL CELL CLASSIFICATION |
|
Author: |
REMYA R , KUMUDHA RAIMOND |
|
Abstract: |
Accurate identification and classification of cervical cells is crucial for the
early diagnosis and treatment of cervical cancer. In image classification,
the ability to capture both local texture and global contextual features is
crucial for improving model performance. Convolutional Neural Networks (CNN) are
effective in extracting local features using hierarchical filters whereas Vision
Transformers (ViT) are good at capturing long range dependencies using
self-attention mechanisms. This study presents a hybrid method that combines the
features extracted by ResNet50 and ViT architectures to integrate their
complementary strengths. The process incorporates Harris Hawks Optimization
(HHO) to extract discriminative features and reduce their dimensionality.
Finally, an SVM was used to classify the optimized features. The method was
evaluated on two benchmarked datasets, SIPaKMeD and Mendeley, achieving
impressive accuracies on both datasets. The results demonstrate that the hybrid
method significantly enhances cervical cell classification and can support more
reliable clinical decisions in cervical cancer detection. |
|
Keywords: |
Cervical cancer, ResNet 50, ViT, Harris Hawks Optimization (HHO), SVM |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
A BIOMETRIC AND SECURITY REINFORCED TECHNIQUE WITH HYBRID DEEPFAKE DETECTION |
|
Author: |
VENKATESWARLU SUNKARI, A. SRINAGESH |
|
Abstract: |
Biometric authentication systems, such as facial recognition and voice
identification, are increasingly used for secure access across devices and
services. However, the rapid advancement of deepfake technology—powered by
generative models like GANs and autoencoders—poses a significant threat by
enabling realistic synthetic media that can spoof biometric systems. These
manipulated inputs can bypass traditional security mechanisms, undermining the
reliability of identity verification. This paper proposes a Hybrid DeepFake
Detection Framework to reinforce biometric security by integrating spatial,
temporal, frequency-based, and physiological signal analysis. The system
leverages Convolutional Neural Networks (CNNs) for image-based forgery
detection, Long Short-Term Memory (LSTM) networks for temporal consistency, and
Fourier Transform techniques for frequency-domain anomaly detection.
Additionally, it incorporates micro-expression and remote photoplethysmography
(rPPG) analysis for detecting liveness. Experiments conducted on benchmark
datasets—FaceForensics++, Celeb-DF, and DFDC—demonstrate that the hybrid model
achieves superior performance compared to baseline models, with an accuracy
exceeding 92% and real-time inference capabilities. The system also exhibits
strong generalization to unseen deepfake methods and resilience against
adversarial perturbations. Compared to existing single-model approaches, our
hybrid system significantly improves detection accuracy, robustness, and
deployability, providing a reliable solution to counter evolving deepfake
threats in biometric authentication systems. |
|
Keywords: |
Biometric Authentication, Deepfake Detection, Hybrid Framework, Convolutional
Neural Networks , FaceForensics++ |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
DP-MCR: A BIO-INSPIRED MULTI-CONSTRAINT ROUTING PROTOCOL FOR DELAY-RESILIENT AND
SECURE COMMUNICATION IN AGRICULTURAL IOT NETWORKS |
|
Author: |
V. S. JAGADEESWARAN , DR. K. SANTHI |
|
Abstract: |
The Internet of Agricultural Things (IoAT) has become central to precision
farming, yet routing in these networks remains constrained by limited energy,
unstable connectivity, and dynamic traffic conditions. To address these
challenges, this research proposes Danaus Plexippus-Based Multi-Constraint
Routing (DP-MCR), a biologically inspired framework that adapts routing
behaviour through natural migration strategies of monarch butterflies. DP-MCR
integrates swarm-based clustering to balance communication loads,
gradient-driven constraint weighting to refine routing decisions, and
nectar-inspired path optimization to select stable, energy-rich relays. A
gliding-based multi-hop mechanism minimizes retransmissions, while
migration-guided load redistribution and self-healing strategies maintain
resilience in the face of node failures and congestion. Real-time,
quality-of-service-aware scheduling further ensures the timely delivery of
critical agricultural data. Simulation analysis demonstrates that DP-MCR reduces
energy consumption by over 30%, lowers the average delay by nearly 20%,
maintains packet delivery ratios above 78%, and improves throughput beyond 290
kbps, while achieving the lowest packet loss across varying densities. By
embedding adaptive swarm intelligence into routing decisions, DP-MCR delivers an
energy-aware, delay-resilient, and fault-tolerant solution tailored to the
operational requirements of agricultural ecosystems in 2025. |
|
Keywords: |
IoT, IoAT, Agriculture, WSN, Routing, Optimization, Swarm Intelligence. |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
BLOOD-BASED BIOMARKERS FOR EARLY PREDICTION OF KNEE BONE CANCER (EPKC): A
PREDICTIVE MODELING TECHNIQUE FOR OSTEOSARCOMA AND CHONDROSARCOMA DIAGNOSIS |
|
Author: |
G.JANAKI, D.UMANANDHINI |
|
Abstract: |
Knee bone cancers such as Osteosarcoma and Chondrosarcoma are challenging to
diagnose in the early stages because their symptoms often resemble those of
non-cancerous conditions like Osteoarthritis. In this study, nine blood based
biomarkers – CRP, IL-6, TNF-α, COMP, MMP-3, VEGF, ESR, LDH, and ALP – were used
in a machine learning based diagnostic model to classify patients into four
categories: Chondrosarcoma, Osteosarcoma, Normal, and Misclassified Cancer. The
model was trained on a dataset of 5119 patient records using a Random Forest
Classifier optimised through hyperparameter tuning. It was hypothesized that a
biomarker-driven, hyperparameter-optimized Random Forest would distinguish
between Osteosarcoma, Chondrosarcoma, and normal cases with very high
accuracy; the model achieved a classification accuracy of 0.95. Feature
importance analysis identified LDH, ALP, and CRP as the most predictive
biomarkers.
Receiver operating characteristic (ROC) analysis further confirmed strong class
separability, with an area under the curve (AUC) of 0.95. The dataset used in
this study was derived from clinical data collected between 2022 and 2025,
containing blood biomarker profiles with confirmed diagnostic labels.
These findings demonstrate the potential of integrating multi-biomarker data
with artificial intelligence to support early and non-invasive detection of knee
bone cancers, which could aid clinicians in timely diagnosis and treatment
planning. |
|
Keywords: |
Osteosarcoma, Chondrosarcoma, Random Forest Classifier, Hyperparameter tuning,
Non-invasive Diagnosis. |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
PERFORMANCE ASSESSMENT OF NHPP SOFTWARE RELIABILITY MODELS APPLYING SHAPE
PARAMETER OF WEIBULL-TYPE LIFE DISTRIBUTIONS |
|
Author: |
HYO JEONG BAE |
|
Abstract: |
This study was conducted with the aim of evaluating the performance properties
of NHPP-based software reliability models parameterized by the shape parameter
of Weibull-type life distributions. For this purpose, failure time
data collected from actual software fault occurrences were utilized, and the
model parameters were estimated using the MLE approach. The reliability
performance of the proposed models was then assessed through several analytical
approaches, including model efficiency evaluation using MSE and R², the degree
of accuracy in prediction against observed values using m(t), instantaneous
failure rate analysis by λ(t), and reliability prediction based on future
mission times. The results revealed that the Inverse-Exponential model
demonstrated superior efficiency compared with the other models. Accordingly,
this study provides a novel analysis of performance issues in reliability models
governed by the shape parameter of the Weibull distribution, an area
insufficiently addressed in prior studies, and offers basic reliability
properties that can be helpful to developers in the preparation stage. |
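The quantities the abstract evaluates, m(t), λ(t), and future-mission reliability, follow directly from the Weibull-type NHPP mean value function; a sketch (the parameter values are illustrative, not MLE estimates from the paper's failure data):

```python
import math

def m(t, a, b, c):
    """Mean value function of a Weibull-type NHPP: expected faults by time t.
    c = 1 gives the exponential (Goel-Okumoto) model, c = 2 the Rayleigh model."""
    return a * (1.0 - math.exp(-b * t ** c))

def lam(t, a, b, c):
    """Intensity function (instantaneous failure rate) lambda(t) = m'(t)."""
    return a * b * c * t ** (c - 1) * math.exp(-b * t ** c)

def reliability(t, mission, a, b, c):
    """P(no failure in (t, t + mission]) for an NHPP: exp(-(m(t+mission)-m(t)))."""
    return math.exp(-(m(t + mission, a, b, c) - m(t, a, b, c)))

# a = total expected faults, b = scale, c = Weibull shape (illustrative values).
a_, b_, c_ = 30.0, 0.05, 1.0
print(round(m(10, a_, b_, c_), 2))              # ~11.8 faults expected by t = 10
print(round(reliability(10, 5, a_, b_, c_), 3)) # chance of surviving the mission
```

Model comparison as in the study would fit (a, b) per shape c by MLE and rank the candidates by MSE and R-squared against the observed cumulative failure counts.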
|
Keywords: |
Exponential, Inverse-Exponential, NHPP, Rayleigh, Property Assessment. |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
HYBRID DEEP LEARNING APPLICATION FOR SOLAR IRRADIANCE FORECASTING MODEL |
|
Author: |
SAUMYA MISHRA , DEEPENDRA PANDEY, SAURABH BHARDWAJ |
|
Abstract: |
Secure scheduling, cost-effective power system operations, and the reduction of
technical and financial risks for the electricity market all depend on accurate
and trustworthy solar irradiance forecasting (SIF). This paper introduces a new
hybrid, data-driven deep learning scheme for SIF. Various geographical and
meteorological datasets are preprocessed, and then redundant variables are
removed using feature selection (FS) and Grey Wolf Optimization-enhanced
Variational Mode Decomposition (GWO-VMD). The refined data are then used to
predict the solar irradiance subsequences with a BiLSTM-Attention network.
Lastly, kernel density estimation with a Gaussian kernel (KDE-Gaussian) is used
to define prediction intervals at different confidence levels. The comparison
shows that the FS-GWO-VMD-BiLSTM-Attention scheme performs better, reducing
MAE, MAPE, and MSE by 93%, 86%, and 98%, respectively, relative to the BPNN
model. With narrower intervals, higher coverage, and richer interval
information, the KDE-Gaussian approach produces outstanding coverage-width
criterion (CWC) results that precisely reflect fluctuations in solar
irradiance. |
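A KDE-Gaussian prediction interval of the kind described above can be sketched with a smoothed bootstrap over past forecast residuals (the residual values, bandwidth rule, and function name are illustrative assumptions, not the paper's procedure):

```python
import random
import statistics

def kde_interval(residuals, point_forecast, level=0.9, n=20000, seed=0):
    """Prediction interval from a Gaussian KDE over past forecast residuals
    (smoothed bootstrap): resample a residual, add N(0, h) noise using a
    Silverman-rule bandwidth h, then take empirical quantiles of the draws."""
    rng = random.Random(seed)
    sd = statistics.stdev(residuals)
    h = 1.06 * sd * len(residuals) ** -0.2         # Silverman's rule of thumb
    draws = sorted(point_forecast + rng.choice(residuals) + rng.gauss(0, h)
                   for _ in range(n))
    lo = draws[int(n * (1 - level) / 2)]
    hi = draws[int(n * (1 + level) / 2) - 1]
    return lo, hi

residuals = [-40, -25, -10, -5, 0, 5, 8, 12, 30, 45]   # W/m^2, illustrative
lo, hi = kde_interval(residuals, point_forecast=600.0)
print(round(lo, 1), round(hi, 1))   # a 90% band around the 600 W/m^2 forecast
```

Sweeping `level` over several values yields the multi-confidence intervals that the coverage-width criterion then scores.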
|
Keywords: |
Solar Irradiance Forecasting, Bidirectional Long Short-Term Memory, Data-Driven
Modeling, Interval Forecasting, Attention Mechanism |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
APPLICATION OF CLOUD COMPUTING TECHNOLOGIES IN DIGITAL MARKETING MANAGEMENT OF
COMPANIES |
|
Author: |
NOOR AHMAD ALKHUDIERAT, HASSAN ALI AL-ABABNEH , HAMZA MOHAMMED MASHAQBEH , FAYEZ
M. AL-KHAWALDEH , JAMEEL AHMAD KHADER , JAFAR ABABNEH |
|
Abstract: |
The active spread of cloud computing technologies is radically changing the
organization of digital marketing in companies. With large volumes of data,
multichannel communication and the need for prompt personalization, traditional
infrastructures are no longer effective. A scientific and practical problem
arises: how to objectively assess the impact of cloud solutions on digital
marketing management. The purpose of this study is to develop and test a
methodology for assessing this impact and, based on it, identify the main
effects of cloud technologies. The research methodology is based on the
construction of a system of digital marketing management performance indicators
(data processing speed, infrastructure costs, conversion rate, return on
marketing investments) and their comparison before and after the implementation
of cloud solutions. Index analysis and normalization of indicators for data
comparability were used as tools; the sample comprised ten international
companies that implemented cloud marketing platforms in 2021-2024. Data was
collected on the basis of public corporate reporting and expert interviews;
calculations were performed using descriptive statistics and benchmarking
methods. The obtained results show that cloud computing increases flexibility
and speed of decision-making, reduces costs and improves key indicators of
marketing activities, which confirms the feasibility of their use. The developed
methodology can be used by managers and researchers for systematic evaluation of
the effectiveness of cloud technologies in digital marketing. |
|
Keywords: |
Cloud Computing, Digital Marketing, Marketing Process Management, IT System |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
SECURING HEALTHCARE RECORDS: A BLOCKCHAIN-BASED APPROACH FOR DATA INTEGRITY AND
ACCESS CONTROL |
|
Author: |
IKRAM BEN ABDEL OUAHAB , LOTFI ELAACHAK , YASSER ALLUHAIDAN , AND MOHAMMED
BOUHORMA |
|
Abstract: |
The management of medical records presents significant challenges in the
healthcare sector, with traditional centralized systems plagued by
vulnerabilities such as cyberattacks, unauthorized access, and data breaches.
These issues compromise the confidentiality, integrity, and accessibility of
sensitive patient information, further exacerbated by the increasing complexity
and volume of healthcare data. This paper proposes a blockchain-based framework
leveraging Ethereum’s decentralized architecture and the InterPlanetary File
System (IPFS) for distributed storage to address these challenges. Smart
contracts, implemented in Solidity, automate critical processes such as patient
registration, access control, and record updates, ensuring traceability and
eliminating intermediaries. Sensitive medical data is encrypted before storage
on IPFS, with only cryptographic hashes stored on-chain to reduce storage
overhead while maintaining data integrity. Decentralized authentication
mechanisms, such as Ethereum wallets and cryptographic keys, enhance security
and usability. This approach empowers patients with full control over their data
and real-time access management. The system’s scalability, transparency, and
patient-centric design provide a robust alternative to traditional models,
paving the way for advancements in healthcare data management and beyond. |
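The off-chain/on-chain split the abstract describes — encrypted record on IPFS, only a cryptographic hash anchored on-chain — reduces to a hash-and-compare pattern; a minimal sketch (a raw SHA-256 digest stands in for an IPFS CID, which is multihash-encoded in practice, and the dict stands in for the Solidity contract's storage):

```python
import hashlib

def prepare_record(encrypted_blob: bytes):
    """Off-chain/on-chain split: the encrypted record goes to IPFS, while
    only its SHA-256 digest is anchored on-chain by the smart contract."""
    digest = hashlib.sha256(encrypted_blob).hexdigest()
    onchain_entry = {"record_hash": digest}   # stand-in for contract storage
    return digest, onchain_entry

def verify_record(encrypted_blob: bytes, onchain_entry: dict) -> bool:
    """Integrity check: recompute the hash and compare with the anchored one."""
    return hashlib.sha256(encrypted_blob).hexdigest() == onchain_entry["record_hash"]

blob = b"ciphertext-of-patient-record"        # assumed encrypted upstream
digest, entry = prepare_record(blob)
print(verify_record(blob, entry))             # True
print(verify_record(blob + b"tampered", entry))  # False: any change is detected
```

Any modification of the stored ciphertext changes the digest, so the on-chain hash alone suffices to detect tampering without revealing the record.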
|
Keywords: |
Blockchain Technology, Medical Record Management, Data Security, Smart
Contracts, InterPlanetary File System (IPFS) |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
CONSENSUS LEARNING WITH PATHOPHYSIOLOGY-INFORMED SPECTRAL FEATURES FOR EEG-BASED
STROKE AND TBI DIAGNOSIS |
|
Author: |
GAO JIALU , NORWATI MUSTAPHA, NORIDAYU BINTI MANSHOR , RAZALI BIN YAAKOB |
|
Abstract: |
Traumatic brain injury (TBI) and stroke are distinct yet critical neurological
conditions requiring timely and accurate diagnosis. While EEG-based binary
classification models often exceed 90% accuracy, clinically relevant three-class
classification (TBI vs. stroke vs. normal) remains a significant challenge, with
current state-of-the-art methods achieving only 69% accuracy and 0.84 ROC-AUC.
This study introduces a novel consensus learning framework that incorporates
pathophysiology-informed frequency band decomposition to improve classification
performance. Three specialized spectral configurations are employed: a
TBI-optimized delta band (0.5–4 Hz), a stroke-sensitive alpha band (8–13 Hz),
and a cross-pathology band covering conventional clinical frequencies. Identical
TMN-CNN architectures are trained on each spectral view, and an adaptive
consensus mechanism integrates predictions via class-specific weighting and
confidence-based voting. Evaluated on 9,276 EEG segments from the Temple
University Hospital EEG Corpus, the proposed method achieves 76.57% balanced
accuracy and 0.911 ROC-AUC, outperforming benchmarks by +0.071 ROC-AUC.
Class-specific analysis confirms balanced performance across categories, with a
+3.22% gain over the best individual model. These results demonstrate the
effectiveness of pathophysiology-guided, multi-view consensus learning for
robust EEG-based neurological classification. |
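The adaptive consensus step can be illustrated with a class-weighted vote over the three spectral views (the probabilities and weights below are illustrative stand-ins; the paper learns class-specific weights on validation data):

```python
def consensus_predict(view_probs, class_weights):
    """Consensus over spectral views: each view's class probabilities are
    scaled by per-(view, class) weights, and the class with the highest
    confidence-weighted sum wins."""
    n_classes = len(next(iter(view_probs.values())))
    fused = [sum(class_weights[v][c] * p[c] for v, p in view_probs.items())
             for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__), fused

# Views: delta band (TBI-tuned), alpha band (stroke-tuned), broadband.
# Classes: 0 = TBI, 1 = stroke, 2 = normal. All numbers illustrative.
probs = {"delta": [0.6, 0.2, 0.2], "alpha": [0.3, 0.5, 0.2], "broad": [0.4, 0.3, 0.3]}
w = {"delta": [1.5, 0.8, 1.0], "alpha": [0.8, 1.5, 1.0], "broad": [1.0, 1.0, 1.0]}
label, fused = consensus_predict(probs, w)
print(label)   # 0 (TBI): the delta view's weighted evidence dominates
```

Up-weighting each view on the pathology its band is sensitive to is what lets the consensus beat the best single model, as the abstract reports.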
|
Keywords: |
EEG Classification, Traumatic Brain Injury, Stroke Detection, Consensus
Learning, Multi-Modal Signal Integration |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103. No. 24-- 2025 |
|
Full
Text |
|
|
Title: |
ADAPTIVE DEEP LEARNING FRAMEWORK FOR ROBUST QR CODE WATERMARKING WITH NOISE-AWARE
PRESERVATION |
|
Author: |
SHIKHA YADAV, SHILPA SHARMA, DR. RAHAMA SALMAN, RUBEENA M. AKHTAR, SAYYADA SARA
BANU |
|
Abstract: |
Digital watermarking has emerged as an essential technology for copyright
protection and content authentication in the era of exponential digital content
proliferation. Quick Response (QR) codes, with their inherent error correction
capabilities and high information capacity, represent promising watermark
carriers that enable machine-readable authentication. However, existing
watermarking methods face fundamental challenges in simultaneously achieving
robust watermark survival against diverse attacks while maintaining
imperceptible embedding quality. Traditional transform-domain techniques suffer
from limited adaptability and manual parameter tuning, while recent deep
learning approaches lack specialized mechanisms for preserving watermark
integrity during noise removal and exhibit insufficient robustness under
sophisticated attacks. To address these limitations, this paper proposes a novel
deep learning framework for QR code watermarking with three key innovations: (1)
an adaptive strength encoder that automatically modulates embedding intensity
based on local image characteristics to optimize the imperceptibility-robustness
trade-off, (2) a dual-detection denoising module featuring independent watermark
and noise detectors that selectively preserve embedded information while
suppressing various noise types, and (3) a multi-scale quality-aware decoder
with parallel feature extraction paths and confidence estimation for robust QR
reconstruction under severe distortions. Extensive experiments on challenging
datasets demonstrate superior performance with QR extraction accuracy exceeding
96%, watermark PSNR above 37 dB, and denoising quality surpassing 31 dB,
substantially outperforming state-of-the-art approaches. |
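The adaptive embedding strength described in innovation (1) can be illustrated with a small stand-alone sketch; the variance-based strength map below is a hypothetical simplification of the paper's learned strength encoder, not its actual architecture:

```python
import numpy as np

def local_variance(img, k=8):
    """Per-block variance as a crude texture measure (a hypothetical
    stand-in for the paper's learned adaptive strength encoder)."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(0, h, k):
        for j in range(0, w, k):
            out[i:i + k, j:j + k] = img[i:i + k, j:j + k].var()
    return out

def embed(host, watermark, base=0.02, gain=0.08):
    """Additive spatial embedding whose strength grows with local texture,
    so flat regions stay visually clean while busy regions hide more signal."""
    v = local_variance(host)
    alpha = base + gain * v / (v.max() + 1e-9)  # per-pixel strength in [base, base+gain]
    return np.clip(host + alpha * watermark, 0.0, 1.0)

host = np.random.rand(64, 64)
wm = np.sign(np.random.rand(64, 64) - 0.5)  # +/-1 watermark pattern
marked = embed(host, wm)
```

Stronger embedding in textured blocks is the classic way to trade imperceptibility against robustness; the paper replaces this hand-crafted rule with a trained encoder.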
|
Keywords: |
Digital Watermarking, QR Codes, Deep Learning, Noise-Aware Denoising, Adaptive
Embedding |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103 No. 24 |
|
|
|
Title: |
MULTI-SCALE FEATURE FUSION IN 3D CNNS FOR AORTIC ANEURYSM DETECTION IN
ANGIOGRAPHY SCANS |
|
Author: |
Dr. B. KALPANA, Dr. N. SUBHASH CHANDRA, Dr. K. SELVAM, Dr. T. VENGATESH |
|
Abstract: |
Abdominal Aortic Aneurysm (AAA) is a life-threatening condition characterized by
the pathological dilation of the aorta. Early detection through Computed
Tomography Angiography (CTA) scans is crucial for preventing rupture, but manual
screening is time-consuming and subject to inter-observer variability. While 3D
Convolutional Neural Networks (CNNs) are a natural fit for volumetric medical
image analysis, they often struggle with the significant scale variation of
AAAs, which can range from subtle, localized dilations to extensive, tortuous
structures spanning large volumes. This paper introduces MS-FFNet3D, a novel
deep learning architecture designed to overcome this challenge through an
advanced multi-scale feature fusion strategy. MS-FFNet3D is built upon a 3D
encoder-decoder backbone integrated with a Multi-Scale Fusion (MSF) module. This
module employs a combination of atrous spatial pyramid pooling (ASPP) and dense
connections to extract and aggregate features at multiple receptive fields,
enabling the network to capture both fine-grained local details for precise
boundary delineation and global contextual information for assessing overall aneurysm
presence. Evaluated on a curated dataset of 358 CTA scans, our model achieved a
Dice similarity coefficient of 91.4% for aneurysm segmentation and an AUC of
0.984 for detection, significantly outperforming standard 3D U-Net (Dice: 87.2%,
AUC: 0.942) and V-Net (Dice: 88.1%, AUC: 0.951) baselines. The proposed method
demonstrates high sensitivity in detecting small, nascent aneurysms, offering a
powerful tool for automated, large-scale screening and precise measurement in
clinical settings. |
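The atrous spatial pyramid pooling idea behind the MSF module can be sketched in one dimension; this toy example (not the authors' 3D implementation) shows how dilated kernels enlarge the receptive field and how multi-rate responses are fused:

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """1-D atrous (dilated) convolution: kernel taps are spaced `rate`
    samples apart, enlarging the receptive field with no extra weights."""
    k = len(kernel)
    span = (k - 1) * rate
    out = np.zeros(len(x) - span)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * rate] for j in range(k))
    return out

def aspp(x, kernel, rates=(1, 2, 4)):
    """Toy ASPP: apply the same kernel at several dilation rates and fuse
    the multi-scale responses (here by summing the common valid region)."""
    outs = [dilated_conv1d(x, kernel, r) for r in rates]
    n = min(map(len, outs))
    return sum(o[:n] for o in outs)

sig = np.sin(np.linspace(0, 6.28, 64))
fused = aspp(sig, np.array([0.25, 0.5, 0.25]))
```

Running the same filter at several dilation rates is what lets one module see both small, localized dilations and large, tortuous structures at once.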
|
Keywords: |
Abdominal Aortic Aneurysm, Computed Tomography Angiography, 3D Convolutional
Neural Networks, Multi-Scale Feature Fusion, Medical Image Segmentation,
Computer-Aided Diagnosis. |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103 No. 24 |
|
|
|
Title: |
FBA SHIELD: ENHANCING PRIVACY PRESERVING VULNERABILITY MANAGEMENT FOR SECURITY
BY DESIGN IN MEDICAL IOT ECOSYSTEMS |
|
Author: |
J. S. REXON, P. MANIKANDAN, C. SWEDHEETHA |
|
Abstract: |
The rapid growth of the Internet of Things (IoT) in the medical industry has
greatly enhanced patient monitoring and diagnosis, but it has also introduced
cybersecurity risks on a new and largely unexplored scale. Traditional
vulnerability-management methods for medical IoT ecosystems rarely deliver both
strong protection and stringent privacy guarantees, because they rely on
centralized designs that expose sensitive medical data to leakage, manipulation,
and attack. To address these weaknesses, this paper proposes an integrated
architecture, FBA SHIELD (Federated Blockchain-Assisted SHIELD), that provides
privacy-preserving vulnerability management grounded in security-by-design
principles. The proposed model applies federated learning for decentralized
anomaly detection, reinforced with blockchain-based encryption and consensus to
secure model updates and prevent data poisoning. The framework was implemented
in Python using TensorFlow and blockchain libraries and evaluated on the IoT
Medical Devices Cybersecurity Dataset from Kaggle, enabling a comprehensive
comparison against baseline models. Empirical results show that FBA SHIELD
achieves 97.8% accuracy, 96% precision, 95% recall, a 96% F1-score, and an AUC
of 98.5%, outperforming state-of-the-art approaches by 4.4 points at a lower
computational cost. The blockchain layer sustained a throughput of 245
transactions per second (TPS) with an average latency of 1.8 seconds, ensuring
efficiency and reliability in decentralized healthcare settings. This study
shows that FBA SHIELD, by combining federated learning with blockchain,
strengthens secure and privacy-preserving vulnerability management for medical
IoT networks. It also contributes new insights into decentralized trust
verification, collaborative model training, and optimized anomaly-detection
efficiency that can scale to privacy-conscious healthcare cybersecurity. The
system is further resilient to evolving attack vectors through a feedback loop
that updates the model in real time, giving it adaptive scalability. FBA SHIELD
is applicable to remote healthcare, smart medical devices, and distributed
hospital networks, scenarios in which the security of sensitive patient
information is paramount. Its strong performance supports its potential as an
innovative, effective, and deployable solution for next-generation medical IoT
cybersecurity. |
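The decentralized training at the core of FBA SHIELD follows the federated-averaging pattern; a minimal sketch, assuming plain FedAvg over NumPy weight vectors (the paper's blockchain-secured update exchange and consensus layer are omitted):

```python
import numpy as np

def local_update(weights, grad, lr=0.1):
    """One hypothetical local training step on a device's private data;
    only the resulting weights -- never the data -- leave the device."""
    return weights - lr * grad

def fed_avg(client_weights):
    """Server-side federated averaging: aggregate client models into a new
    global model without collecting any raw medical records."""
    return np.mean(client_weights, axis=0)

global_w = np.zeros(4)
clients = [local_update(global_w, np.random.default_rng(i).normal(size=4))
           for i in range(5)]
global_w = fed_avg(clients)
```

In FBA SHIELD the averaged update would additionally be validated by blockchain consensus before adoption, which is what blocks poisoned client updates.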
|
Keywords: |
Blockchain Security, FBA SHIELD, Federated Learning, Medical IoT Cybersecurity,
Privacy-Preserving Vulnerability Management |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103 No. 24 |
|
|
|
Title: |
MITIGATING ADVERSARIAL STYLOMETRY USING LEF-HT AND C2S-CGReLUNN FOR MULTI-AUTHOR
WRITING STYLE ANALYSIS |
|
Author: |
RIYA SANJESH, Dr. PAMELA VINITHA ERIC |
|
Abstract: |
Multi-author writing style analysis is necessary to identify individual authors
within collaborative text. Most prevailing works in the research literature on
countering adversarial stylometry have paid little or no attention to
punctuation usage and the geometric shapes of handwritten text.
Therefore, in this work, Cutmix-Shake-Shake Convolutional Gaussian-error
Rectifier Linear Unit Neural Network (C2S-CGReLUNN) classifier-based
multi-author identification is implemented. Initially, the handwritten images
are collected and pre-processed. Later, the edges are detected using the Lanczos
Kernel-based Canny Edge Detector (LK-CED). Next, the geometric patterns are
analyzed using Logarithmic Exponential Fit Hough Transform (LEF-HT). Further,
the stroke movement is detected using the Reflect Padding-based Fast Fourier
Transform (RP2FT). Next, the stylometric features are extracted. Meanwhile, the
text is recognized, and from this, the punctuation-based features are extracted,
followed by the calculation of punctuation density. Also, the vocabulary
richness is identified using the Entropy-based Spatial Indexing Bidirectional
Encoder Representation from Transformers (ESI-BERT) technique. Finally, the
extracted features, the author's punctuation density, and the identified
vocabulary richness are fed to the C2S-CGReLUNN for author classification. Thus,
the multi-author writing style is analyzed effectively with a prediction
accuracy of 99.07%. |
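The punctuation-density cue mentioned above is simple to compute; a minimal sketch, assuming a plain character-ratio definition (hypothetical, not necessarily the paper's exact formulation):

```python
import string

def punctuation_density(text):
    """Fraction of non-whitespace characters that are punctuation -- one of
    the simple stylometric cues the abstract describes."""
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return 0.0
    return sum(c in string.punctuation for c in chars) / len(chars)

d = punctuation_density("Well, well -- who wrote this?!")  # -> 0.2
```

Per-author densities like this one become input features alongside the geometric and stroke-movement descriptors before classification.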
|
Keywords: |
Multi-Author Writing Style Analysis, Stylometric Features, Deep Learning,
Natural Language Processing (NLP), Geometric Character Analysis, Stroke Movement
Identification, Edge Detection, Text Recognition, Convolutional Neural Network |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103 No. 24 |
|
|
|
Title: |
AI-ENHANCED GREEN COMPUTING FRAMEWORK FOR REDUCING DATA CENTER ENERGY FOOTPRINT |
|
Author: |
R. SREE CHAITRA, M. HEMANTH, K. SRIKANTH, RAVI KUMAR TATA, YELISELA RAJESH |
|
Abstract: |
The rapid escalation of digital media and the continuous pursuit of high-speed
computation have highlighted critical shortcomings in conventional image
processing paradigms, particularly concerning scalability, power efficiency, and
environmental responsibility. Traditional approaches rely heavily on
resource-demanding operations, which contribute significantly to energy usage
and carbon emissions, thereby conflicting with the global agenda of sustainable
digital ecosystems. Recent research has begun to emphasize sustainable computing
directions in visual data analysis, targeting both algorithmic efficiency and
hardware optimization to minimize energy overheads. Yet, insights from
eco-conscious design in image processing remain fragmented and underexplored.
This work addresses the pressing requirement for environmentally aligned image
analysis by reviewing existing practices, examining their limitations, and
identifying the sustainability trade-offs inherent in current solutions.
Furthermore, we propose an optimized framework for feature extraction and
interpretation that balances computational capability with reduced energy
expenditure. Experimental assessments against conventional baselines confirm the
framework’s ability to deliver reliable processing outcomes while achieving
notable reductions in energy demand. In addition, this work contributes a
unified, data-driven framework integrating key green-computing
techniques—virtualization, cooling optimization, hardware–software efficiency
improvements, and renewable-energy utilization—to systematically reduce energy
consumption in data centers. Unlike studies that investigate these techniques
independently, this study presents a combined optimization approach supported by
real energy-consumption modeling. A hybrid machine-learning model
(SVM+RF+XGBoost) is also developed to accurately predict data-center energy
usage, enabling proactive energy-management strategies. Experimental results
demonstrate improved prediction accuracy and a measurable reduction in overall
energy demand, providing new insights into sustainable and environmentally
conscious data-center operations. |
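The hybrid SVM+RF+XGBoost predictor presumably combines member outputs into one estimate; a minimal stand-in using toy regressors in place of the three libraries shows the averaging pattern (member models and target are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the SVM / RF / XGBoost members: each maps a feature
# vector (e.g., CPU load, inlet temperature) to predicted energy use.
def linear_model(X, y):
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return lambda Q: np.c_[Q, np.ones(len(Q))] @ w

def nearest_neighbour(X, y):
    return lambda Q: np.array([y[np.argmin(((X - q) ** 2).sum(1))] for q in Q])

def mean_model(X, y):
    return lambda Q: np.full(len(Q), y.mean())

X = rng.random((200, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.01, 200)  # synthetic target

models = [fit(X, y) for fit in (linear_model, nearest_neighbour, mean_model)]

def ensemble(Q):
    """Simple average of member predictions -- one common way to combine
    heterogeneous regressors into a hybrid energy predictor."""
    return np.mean([m(Q) for m in models], axis=0)

pred = ensemble(X[:5])
```

Averaging (or stacking) heterogeneous regressors tends to cancel uncorrelated errors, which is the usual motivation for hybrids of this kind.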
|
Keywords: |
Sustainable Image Processing, Energy-Conscious Algorithms, Green Computation,
Low-Power Visual Analytics, Resource-Aware Systems |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103 No. 24 |
|
|
|
Title: |
DEEP LEARNING POWERED HYBRID FRAMEWORK FOR MULTICLASS HISTOPATHOLOGY BASED
CANCER DIAGNOSIS |
|
Author: |
PRIYANKA KHABIYA, PROF. (DR.) FIROJ PARWEJ |
|
Abstract: |
This work introduces an explainable deep learning approach for classifying lung
and colon cancer using histopathological images. Recent improvements in deep
learning have significantly enhanced medical image analysis; however, most
existing models still suffer from poor interpretability, limited transparency,
and weak generalization across tissue types. To address these gaps and enhance
clinical trust, this study proposes a transparent and reliable deep learning
framework for cancer diagnosis. The framework leverages a custom Convolutional
Neural Network architecture, enhanced with the Swish activation function to
improve convergence and representation learning. This architectural choice
enables better feature extraction and convergence than traditional activation
functions, resulting in improved generalization across histopathological
patterns. To improve transparency and model accountability, explainability
techniques such as Gradient-weighted Class Activation Mapping (Grad-CAM) and
Integrated Gradients (IG) are employed, providing visual insight into the
regions that contribute to model predictions. Evaluated on a standard
histopathology dataset, the proposed model achieves high classification
performance with 98.69% accuracy, 98.71% precision, 98.69% recall, and a 98.70%
F1-score. The visual attribution maps confirm that the network attends to
diagnostically significant tissue regions, reinforcing its reliability for
medical image analysis. A key contribution of this study is its demonstration
that combining an optimized CNN architecture with explainable AI methods can
improve diagnostic accuracy and interpretability concurrently. The findings
suggest that coupling advanced CNN design with explainability techniques can
deliver both accuracy and interpretability in digital pathology applications. |
|
Keywords: |
Deep Learning, Histopathology, Lung Cancer, Colon Cancer, SegNet, EfficientNet,
VGG16, Explainable AI, Grad-CAM, Integrated Gradients |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103 No. 24 |
|
|
|
Title: |
KNOWSENTMIX: LEVERAGING CONTEXTUAL EMBEDDINGS AND KNOWLEDGE GRAPHS FOR DEEPER
SENTIMENT UNDERSTANDING IN CODE-MIXED TEXT |
|
Author: |
MANYAM THAILE, CHIN-SHIUH SHIEH, DR. M. NAGABHUSHANA RAO, DR. S. K. RAJESH
KANNA, DR. RAMARAJU NAGARJUNA KUMAR, DR C MADHUSUDANA RAO, DR. HRUSHIKESH
JAIWANT JOSHI, DR. U D PRASAN, RAMARAO NAYAPAMU |
|
Abstract: |
KnowSentMix is a hybrid sentiment-reasoning framework for code-mixed social text
that selectively combines multilingual contextual representations with compact
commonsense knowledge. The method uses an agreement-gated alignment mixture that
(i) retrieves and encodes concise knowledge snippets, (ii) grounds concepts to
tokens through optimal-transport–based soft alignment, and (iii) fuses text-only
and knowledge-only predictions via a gate driven by uncertainty and inter-expert
agreement. The system returns a sentiment label together with compact
explanatory evidence—top concepts and, when available, trigger spans—improving
interpretability with modest computational overhead. Objectives, scope,
mathematical formulation, and algorithmic design are accompanied by a full-mode
implementation using XLM-R with ConceptNet retrieval. Comparative efficiency
measurements and ablation studies indicate that adaptive knowledge gating
delivers stable gains on idiomatic, implicit, and noisy code-mixed inputs while
preserving efficiency. |
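The agreement-gated fusion of text-only and knowledge-only predictions can be sketched numerically; the gate below (text-expert confidence plus cosine agreement pushed through a sigmoid) is a hypothetical simplification of the paper's uncertainty- and agreement-driven mixture:

```python
import numpy as np

def entropy(p):
    """Predictive entropy as an uncertainty proxy for the gating signal."""
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum()

def gated_fusion(p_text, p_know, tau=1.0):
    """Blend two class distributions: lean on the text expert when it is
    confident and the experts agree; otherwise draw on the knowledge expert.
    (A hypothetical simplification of the agreement-gated mixture.)"""
    agreement = float(np.dot(p_text, p_know)
                      / (np.linalg.norm(p_text) * np.linalg.norm(p_know)))
    confidence = 1.0 - entropy(p_text) / np.log(len(p_text))  # 1 = certain
    g = 1.0 / (1.0 + np.exp(-tau * (confidence + agreement - 1.0)))  # gate in (0, 1)
    return g * p_text + (1.0 - g) * p_know

p = gated_fusion(np.array([0.7, 0.2, 0.1]), np.array([0.5, 0.3, 0.2]))
```

Because the gate only mixes two already-computed distributions, the knowledge path can be skipped or down-weighted cheaply, which matches the abstract's claim of modest computational overhead.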
|
Keywords: |
Code-Mixed NLP, Sentiment Analysis, Commonsense Knowledge, ConceptNet, COMET,
XLM-R, Optimal Transport, Interpretability, Knowledge-Infused Learning,
Multilingual Transformers. |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2025 -- Vol. 103 No. 24 |
|
|
|
|