Submit Paper / Call for Papers
The journal receives papers on a continuous basis and considers articles from a wide range of Information Technology disciplines, from basic research to the most innovative technologies. Please submit your paper electronically through our submission system at http://jatit.org/submit_paper.php in MS Word, PDF, or a compatible format so that it may be evaluated for publication in an upcoming issue. This journal uses a blinded review process; please include all your personally identifiable information in the manuscript when submitting it, and we will remove the necessary information on our side before review. Submissions to JATIT should be full research / review papers (indicated below the main title).
Journal of
Theoretical and Applied Information Technology
September 2025 | Vol. 103 No. 17
Title:
AN ARTIFICIAL INTELLIGENCE SERVER DESIGN AND THE ROLE OF THREADS IN PARALLEL PROCESSOR APPLICATIONS

Author:
DURGA PRASAD NOWDU, NAVYA KAILASAM, DIVESH SINGH SAI, B SARITHA, BANDARU SATYA LAKSHMI, CH. SABITHA, DR ARUN KUMAR UNDAMATLA, CH. CHANDRA MOHAN

Abstract:
With the growing demand for high-performance Artificial Intelligence (AI) applications, the design of efficient server architectures has become critical. Modern AI workloads, such as training deep learning models and processing large-scale data, require significant computational resources, which can be effectively met using distributed computing environments composed of high-performance workstations interconnected by fast networks. To ensure usability, such distributed systems must provide a single-system image, abstracting the complexity of the underlying hardware from end users. In building AI applications for these environments, threads offer a more efficient and practical alternative to traditional processes. Threads have lower overhead for creation, context switching, and communication, making them ideal for implementing fine-grained parallelism in multi-core and multi-processor systems. By utilizing threads, AI servers can efficiently distribute computation across multiple processors, improving throughput and responsiveness. This paper presents a prototype AI server design and investigates how multithreading enhances the performance of AI applications running on parallel processing architectures.
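The thread-based serving model the abstract describes can be illustrated with Python's standard library; this is a minimal sketch, not the authors' server, and `handle_request` is a hypothetical stand-in for actual model inference.

```python
from concurrent.futures import ThreadPoolExecutor
import threading
import time

def handle_request(req_id):
    # Placeholder for model inference; a real AI server would run a
    # forward pass here. time.sleep simulates request latency.
    time.sleep(0.01)
    return (req_id, threading.get_ident())

# A fixed pool of worker threads serves requests concurrently; creating
# and switching between threads is cheaper than spawning processes,
# which is the efficiency argument made above.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(16)))

workers_used = {tid for _, tid in results}
print(len(results), len(workers_used))
```

Because the pool caps concurrency at four workers, the sixteen requests are multiplexed over at most four threads, sketching how a server distributes work without per-request process overhead.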

Keywords:
Server Design, Threads, Parallel Processor, AI-Driven, Computing

Source:
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103 No. 17 -- 2025

Full Text

Title:
A NOVEL IMAGE-INVARIANT FEATURE EXTRACTION USING SLIDING WINDOW FOR MACHINE LEARNING

Author:
SRAVAN KIRAN VANGIPURAM, RAJESH APPUSAMY

Abstract:
Feature extraction is an important task in building machine learning and deep learning applications. The gray-level co-occurrence matrix, histogram of oriented gradients, local binary patterns, principal component analysis, and linear discriminant analysis are feature extraction methods widely used in research studies. However, these methods are sensitive to image quality, struggle with non-linear relationships, are computationally expensive, and cannot capture global or contextual information. These limitations often require additional preprocessing or modifications to enhance their performance in practical applications. The goal of this study is to devise a feature extraction method that is robust to image transformations and can capture both local and global features, thereby improving machine learning classifiers. To this end, we propose a new feature extraction method based on the concept of a sliding window to extract local and global image-invariant features. To evaluate the proposed method, we used chest X-ray medical images from the publicly available Novel COVID-19 Chest X-Ray Repository dataset on Kaggle. We conducted experiments using five benchmark feature extraction methods and eight state-of-the-art machine learning classifiers to assess the significance of the proposed feature extraction. For binary classification, the tests indicated that MLP achieved better accuracy, recall, precision, specificity, and balanced accuracy than other methods (96.25% accuracy, 96.05% recall, 92.4% precision, 96.34% specificity, and 96.19% balanced accuracy). For multiclass classification, the dense MLP neural network, which has two hidden layers with 1024 and 512 neurons, classified correctly with 93.98% accuracy, 92.07% recall, 93.21% precision, 95.32% specificity, and a balanced accuracy of 93.69%.
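The sliding-window idea of combining local and global descriptors can be sketched as follows; the window size, stride, and the specific statistics here are illustrative assumptions, not the paper's actual descriptor set.

```python
import numpy as np

def window_features(img, win=8, step=8):
    """Slide a win x win window over the image, collecting simple local
    statistics per window, then append global statistics of the whole
    image so both local and global structure are represented."""
    feats = []
    h, w = img.shape
    for r in range(0, h - win + 1, step):
        for c in range(0, w - win + 1, step):
            patch = img[r:r + win, c:c + win]
            feats.extend([patch.mean(), patch.std()])  # local descriptors
    feats.extend([img.mean(), img.std()])              # global descriptors
    return np.array(feats)

img = np.arange(32 * 32, dtype=float).reshape(32, 32)
v = window_features(img)
print(v.shape)  # 16 windows * 2 stats + 2 global stats -> (34,)
```

The resulting fixed-length vector could then feed any of the classifiers the abstract benchmarks, with window size and stride treated as hyperparameters.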

Keywords:
Sliding window, Feature extraction, Local patterns, Global patterns, Hyperparameters

Source:
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103 No. 17 -- 2025

Full Text

Title:
ENHANCING BLOCKCHAIN ANONYMITY USING SECURE MACHINE LEARNING APPROACH WITH ONION AND GARLIC ROUTING IN HEALTHCARE

Author:
RAKSHIT KOTHARI, KALPANA JAIN

Abstract:
The research explores an innovative approach to enhancing blockchain anonymity by integrating onion and garlic routing mechanisms with deep learning techniques, specifically Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) neural networks. The study addresses growing concerns about privacy and traceability in blockchain transactions by developing a hybrid system that combines the encryption strengths of onion and garlic routing with the predictive capabilities of recurrent neural networks. Our experimental results demonstrate that this integration significantly enhances transaction privacy while maintaining optimal system performance. The proposed model achieved a 94.7% success rate in obscuring transaction origins and destinations, with a 37% improvement in routing efficiency compared to conventional methods. This work provides a promising framework for privacy-focused blockchain applications in secure communication systems and healthcare.

Keywords:
Onion Routing, Garlic Routing, Long Short-Term Memory, Gated Recurrent Unit, Healthcare, Anonymity, Deep Learning

Source:
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103 No. 17 -- 2025

Full Text

Title:
GEOSPATIAL INTELLIGENCE AND MACHINE LEARNING FOR FOREST CONSERVATION: AN INTEGRATED APPROACH

Author:
MOHAMED ATIFI, LAHCEN ZIDANE, AZIZ BOUJEDDAINE, SARA RIAHI, HAMID KHALIFI

Abstract:
When thinking about forest degradation, it is tempting to see it simply as the loss of trees. However, the reality is much more complex. Degradation does not only reduce forest cover; it triggers a chain reaction, disrupting ecosystems, accelerating climate change, and affecting the livelihoods of communities that depend on forest resources. Although a substantial body of research exists, we noticed that many studies tend to treat deforestation, wildfires, and illegal logging as separate issues. From our perspective, this fragmented approach often overlooks how these problems are interconnected and how they amplify each other. Traditional assessment methods also present limitations. They usually rely on qualitative observations and have not yet fully integrated the possibilities offered by Geographic Information Systems (GIS) and machine learning. This gap partly explains why risk prediction and the development of targeted conservation strategies remain so challenging. In light of these observations, we propose an integrated framework that combines environmental and socio-economic prioritization through the Analytic Hierarchy Process (AHP), spatial analysis using GIS, and predictive modeling based on machine learning techniques. Our main contribution lies in bridging fragmented approaches by offering a unified decision-support method that leverages spatial analysis and predictive modeling for more accurate and actionable forest risk assessment. Rather than replacing existing approaches, our objective is to complement them and offer decision-makers practical tools to anticipate degradation risks more effectively. We believe that bridging the gap between research and action is crucial for achieving meaningful, long-term forest conservation outcomes.
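The AHP step in a framework like this derives criterion weights from a pairwise comparison matrix. The sketch below uses the standard geometric-mean approximation of the principal eigenvector; the matrix values and the three example criteria are purely hypothetical, not figures from this study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix over three criteria
# (say, wildfire risk, deforestation pressure, socio-economic value).
# Entry A[i, j] states how much more important criterion i is than j,
# so the matrix is reciprocal: A[j, i] = 1 / A[i, j].
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Geometric-mean method: a standard approximation of the principal
# eigenvector used to derive AHP priority weights.
gm = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = gm / gm.sum()
print(weights.round(3))
```

The normalized weights can then scale the corresponding GIS risk layers before the machine-learning prediction stage; in practice one would also check the matrix's consistency ratio before trusting the weights.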

Keywords:
Forest Degradation; Climate Impact; Ecosystem; Wildfires; Deforestation; Multi-Criteria Methodology; Analytic Hierarchy Process (AHP); Geographic Information Systems (GIS); Artificial Intelligence (AI); Machine Learning

Source:
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103 No. 17 -- 2025

Full Text

Title:
REDESIGNING THE KNOWLEDGE OF ROBOTIC SYSTEM ENGINEERS THROUGH THE MODEL-PARAMETRIC SPACE OF INFORMATION TECHNOLOGY

Author:
OLEKSIY RYKHALSKYY

Abstract:
The relevance of the topic is driven by the rapid development of digital technologies and the growing complexity of designing robotic systems, which requires new approaches to the representation, structuring, and verification of knowledge. The need to integrate heterogeneous models and parameters into a single information environment determines the importance of the study for industrial and scientific applications. The purpose of the study is to investigate methods of representing the knowledge of robotic system designers based on the model-parameter space and to describe the possibilities of their practical use. The object of research is the process of formalizing knowledge and integrating models in the design of complex technical systems. The methodological basis of the study is a systematic approach, semantic modeling methods, interval analysis, and set-theoretic operations on knowledge neighborhoods. As a result of the work, the concept of building a model-parameter space (<M,P>) was formed, a classification of basic concepts and categories in the field of design was developed, and formal measures of compatibility, integrity, and completeness of knowledge were proposed. Algorithms for constructing knowledge neighborhoods and integrating models into holistic methodologies with automated consistency checking have been developed. The structure of a software tool suite to support the processes of model analysis and synthesis in robotic systems is presented. The practical significance of the results lies in the possibility of using the proposed approaches in the development of digital engineering platforms, reducing the risk of design errors, increasing the adaptability of design solutions, and reducing the development time of complex technical systems.

Keywords:
Model, Parameter, System, Analysis, Design, Research, Complex Object, Knowledge Base, Consistency, Heterogeneous Models, Information Technology

Source:
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103 No. 17 -- 2025

Full Text

Title:
TECHNOLOGY ORGANIZATION ENVIRONMENT MODEL OF DIGITAL TRANSFORMATION IN PUBLIC ACCOUNTANT PROFESSION FOR GOVERNANCE ENHANCEMENT

Author:
BAMBANG LEO HANDOKO, SARAH JOHANA MAGDALENA SIREGAR, MUHAMMAD FAIZ ATTHALLAH

Abstract:
In today's digital era, digital transformation has expanded beyond industrial sectors and is increasingly penetrating public accounting firms. This transformation is believed to enhance governance by improving efficiency and effectiveness. This study aims to examine the factors influencing digital transformation adoption using the Technology-Organization-Environment (TOE) framework. A quantitative method was employed, and primary data was collected through questionnaires distributed to auditors working at public accounting firms. Data analysis was conducted using Structural Equation Modeling with Partial Least Squares (SEM-PLS) via SmartPLS version 4. The results indicate that technological, organizational, and environmental factors significantly and positively influence digital transformation adoption. Furthermore, digital transformation adoption is found to have a significant and positive effect on governance enhancement.

Keywords:
Accountant, Digital, Transformation, Governance, Enhancement

Source:
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103 No. 17 -- 2025

Full Text

Title:
AI-ENABLED SYSTEM FOR IDENTIFYING AND MANAGING INAPPROPRIATE OFFENSIVE CONTENT DETECTION AND MODERATION

Author:
S. SEETHALAKSHMI, Dr. B. BALAKUMAR

Abstract:
This research tackles the important problem of offensive data detection and classification using deep learning methods within the context of social networks. The end goal of this research is to create an artificial intelligence (AI) system that can identify inappropriate data in many forms of media, such as text, audio, and images. The system can detect potentially harmful items by using technologies such as Optical Character Recognition (OCR), Google Text-to-Speech (GTTS), and Natural Language Processing (NLP). Problems plaguing this area of study at the moment include small datasets and biased model results. Using conventional metrics such as Accuracy, Precision, Recall, and F1-measure, the research conducts an exhaustive review to determine the effectiveness of various techniques. According to the results, deep learning models, and the Recurrent Neural Network (RNN) architecture in particular, are very effective. By facilitating the early detection and prevention of cyberbullying, our study contributes substantial new information towards the goal of creating safer and more inclusive societies. By skillfully extracting data from social media platforms and applying sophisticated preprocessing techniques and meticulous hyperparameter adjustment, our models outperform prior research in identifying and classifying foul language.

Keywords:
Offensive Data, Artificial Intelligence, Deep Learning, Optical Character Recognition, Google Text-to-Speech, Natural Language Processing, Recurrent Neural Network

Source:
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103 No. 17 -- 2025

Full Text

Title:
ARTIFICIAL BEE COLONY-DRIVEN FEATURE SELECTION EMPOWERING NAIVE BAYES CLASSIFIER FOR SUPERIOR CORONARY ARTERY DISEASE PROGNOSIS

Author:
JAYAMOL P. JAMES, Dr. GNANAPRIYA S.

Abstract:
The rise in mortality due to Coronary Artery Disease (CAD) underscores the urgent need for accurate, early-stage detection systems. Traditional classification models often struggle with imbalanced datasets, irrelevant features, and limited interpretability, hindering clinical reliability. This study introduces an optimized hybrid diagnostic model, the Firefly Swarm Optimization-based Decision Tree (FSO-DT), designed to overcome these limitations. The approach leverages the bio-inspired characteristics of firefly swarms to perform robust feature selection, reducing dimensionality and enhancing classification relevance. The refined feature set is subsequently processed through a decision tree classifier, selected for its transparency and clinical interpretability. The proposed model was evaluated using a benchmark CAD dataset, assessing performance through key metrics including accuracy, sensitivity, specificity, and F1-score. Results indicate significant improvements compared to conventional classifiers and unoptimized decision trees. This optimization not only improves classification performance but also ensures model efficiency by minimizing computational overhead. The FSO-DT model delivers an interpretable and scalable solution, capable of aiding clinicians in risk assessment and decision-making processes. The bio-inspired nature of the algorithm ensures adaptability across diverse clinical scenarios, making it a promising tool in medical data analytics. This work contributes to intelligent healthcare systems by integrating evolutionary computation with interpretable learning, fostering advancements in early diagnosis and patient-centered outcomes.

Keywords:
Coronary Artery Disease, Firefly Swarm Optimization, Decision Tree, Feature Selection, Medical Diagnosis, Classification Accuracy

Source:
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103 No. 17 -- 2025

Full Text

Title:
DEEP-LEARNING MODEL FOR BREAST TUMOR IDENTIFICATION USING A HYBRID APPROACH WITH PRE-TRAINED MODELS

Author:
RAMI YOUSEF

Abstract:
Globally, breast cancer is among the most common types of cancer affecting women, and early detection can save lives. In the case of breast cancer, early detection improves the chances of a favorable prognosis since prompt treatment can be commenced. Even where access to specialist physicians is limited, machine learning can facilitate early diagnosis of breast cancer. With the rapid advancement of machine learning, especially deep learning, the medical imaging community is increasingly interested in using these methods to improve cancer detection accuracy. Nevertheless, significant deficiencies persist in the existing literature: primary challenges encompass the restricted availability of extensive, balanced, and meticulously annotated datasets; complications regarding model generalizability and external validation; and enduring class imbalance that may result in false negatives and affect clinical reliability. Moreover, there is an absence of established standards, and comparing studies is challenging owing to different datasets and evaluation processes. Many deep learning models exhibit robust performance on internal test sets; however, they frequently struggle to sustain accuracy when evaluated on external, heterogeneous data, which raises questions regarding their practical applicability. Only a limited amount of data is available on these diseases. The objective of this study is to present a deep learning model for identifying and classifying breast cancer. Breast histopathology images from the Kaggle open-access database were used to assess the system's performance. Several quantitative criteria are used to evaluate the efficacy of the suggested strategy, including accuracy, precision, recall, and the F1-score. All previous models fail to match the performance of the ensemble model, composed of VGG16, EfficientNetB7, and Xception. As a result of the study, the accuracy, precision, recall, and F1-score for invasive ductal carcinoma (IDC) reached 95.1%, 96.5%, and 95.77%, respectively. The results emphasize the strength and dependability of our innovative methodology in automating breast cancer classification, surpassing previous research endeavors, optimizing diagnostic procedures, and ultimately helping to preserve the lives of patients.

Keywords:
MRI Spine Hemangioma Segmentation, U-Net Model, Convolutional Neural Network (CNN), Semantic Segmentation, Precision, Dice Coefficient, Accuracy

Source:
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103 No. 17 -- 2025

Full Text

Title:
AN EFFICIENT AND EXTREME LEARNING MACHINE FOR AUTOMATED DIAGNOSIS OF BRAIN TUMOR

Author:
SAYEEDAKHANUM PATHAN, NIKHIL TEJA GURRAM, P. KAVITHA, V NEELIMA, B. SRINIVAS, CHUNDURI LAVANYA A, BHUPESH DEKA

Abstract:
An efficient automated system is needed to detect tumor-affected and non-affected regions of the human brain. Although traditional models are already available for this purpose, they face the problems of excessive segmentation, high time consumption, high error rates, and overfitting. The suggested method, with the aid of preprocessing procedures, enhances the contrast and quality of the input MRI images. From the pre-processed images, both texture and statistical characteristics are extracted for the subsequent classification process. Dimensionality reduction is then accomplished with the aid of the Rapid Sine Cosine Swarm Optimization method, which lowers training times and improves model accuracy. To precisely classify healthy and non-healthy images, an extreme learning machine is employed. Finally, to locate the exact tumor-affected part of the image, autoencoder-based segmentation is used. The proposed model is executed on the BRATS dataset and assessed using a range of performance metrics, including accuracy, recall, F1-score, and precision, which yielded results of 98.93%, 99.21%, 97.67%, and 96.17%, respectively, on the training dataset.
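An extreme learning machine of the kind named above has a characteristically short implementation: input-to-hidden weights are random and fixed, and only the output weights are solved in closed form via the pseudoinverse. This is a generic ELM on toy data, with an assumed hidden-layer size, not the paper's tuned model.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, hidden=50):
    """Extreme learning machine: random, untrained input weights plus a
    closed-form output layer via the Moore-Penrose pseudoinverse."""
    W = rng.normal(size=(X.shape[1], hidden))  # fixed random projection
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                     # hidden-layer activations
    beta = np.linalg.pinv(H) @ y               # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy binary problem standing in for healthy / tumorous feature vectors.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
W, b, beta = elm_train(X, y)
acc = ((elm_predict(X, W, b, beta) > 0.5) == y).mean()
print(round(acc, 3))
```

Because no gradient descent is involved, training reduces to one matrix pseudoinverse, which is the speed advantage the abstract attributes to the approach.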

Keywords:
Magnetic Resonance Imaging, Preprocessing, Rapid Sine Cosine Swarm Optimization, Dimensionality Reduction, Auto-Encoder

Source:
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103 No. 17 -- 2025

Full Text

Title:
ADAPTIVE DEEP REINFORCEMENT LEARNING-DRIVEN TEST CASE PRIORITIZATION FOR HIGHLY VOLATILE SOFTWARE ENVIRONMENTS

Author:
SRINIVASA RAO KONGARANA, ANANDA RAO A, RADHIKA RAJU P

Abstract:
Test execution needs change frequently due to software updates, making fixed test case prioritization unreliable. Unnecessary test runs and delays in finding faults occur because static methods do not adjust when new defects appear. Reordering test cases dynamically based on changing conditions can improve prioritization. A model that updates test rankings using past execution patterns, uncertainty estimation, and retraining triggers is introduced in this study as Adaptive Deep Reinforcement Learning-Driven Test Case Prioritization (ADRL-TCP). Long Short-Term Memory (LSTM) combined with a Double Deep Q-Network (DDQN) stores test effectiveness across multiple cycles. Test cases are ranked based on their likelihood of revealing failures using Bayesian uncertainty estimation. Prioritization is updated without restarting the entire learning process through incremental retraining. In continuous integration environments, the model is tested and compared to Reinforcement Learning for Test Case Prioritization (RL-TCP) and Prioritized Experience Replay Based on Dynamics Priority (PERDP). Finding failures requires fewer test executions, and faults are detected earlier using ADRL-TCP. Past outcomes and recent changes influence test rankings due to the memory-based learning approach. High-risk test cases receive earlier execution because prioritization is adjusted based on uncertainty awareness. Reducing test overhead while maintaining fault detection consistency in continuous testing environments is possible through ADRL-TCP, which adjusts as software evolves. Risk-aware ranking, adaptive retraining, and learning-based memory integration improve test prioritization, as observed in these findings.
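The uncertainty-aware ranking idea can be illustrated independently of the full LSTM/DDQN model: given several stochastic predictions of each test's failure probability, rank by the mean plus an uncertainty bonus so risky and poorly understood tests run first. The sample counts, the Beta-distributed dummy predictions, and the 0.5 bonus weight below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-test failure-probability samples, e.g. from repeated
# stochastic forward passes of a predictive model: T samples x N tests.
samples = rng.beta(2, 5, size=(30, 6))

mean_fail = samples.mean(axis=0)       # expected failure likelihood
uncertainty = samples.std(axis=0)      # spread across samples
score = mean_fail + 0.5 * uncertainty  # risk-aware priority score
order = np.argsort(-score)             # run the riskiest tests first
print(order)
```

Re-estimating the scores after each CI cycle gives a crude analogue of the incremental reprioritization the abstract describes.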

Keywords:
Test Case Prioritization, Deep Reinforcement Learning, Continuous Integration, Uncertainty Estimation, Adaptive Learning, Automated Retraining, Software Testing, Fault Detection

Source:
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103 No. 17 -- 2025

Full Text

Title:
ENHANCING QUALITY OF SERVICE FOR ROUTING IN IOT USING THE PROPOSED DEEP BELIEF LION OPTIMIZATION (DBLO) ALGORITHM

Author:
R. YANITHA, DR. M. LOGAMBAL

Abstract:
Modern communication networks have been completely transformed by the Internet of Things (IoT), which makes it possible for a wide range of devices to exchange data efficiently. However, maintaining the best possible Quality of Service (QoS) in IoT networks, especially in cluster-based architectures, remains a major difficulty. To improve QoS metrics in IoT routing, this study presents the Deep Belief Lion Optimization (DBLO) algorithm, a novel combination of the Lion Optimization Algorithm (LOA) and Deep Belief Networks (DBN). Modern techniques such as Ant Colony Optimization (ACO), the Krill Herd Algorithm, and Convolutional Lion Routing Optimization (CLRO), which blends LOA with Convolutional Neural Networks (CNN), are compared to the DBLO algorithm. Critical QoS metrics, including throughput, energy consumption, routing overhead, packet delivery ratio (PDR), and end-to-end delay, are used to gauge performance. Experimental results show how effective the DBLO algorithm is at maximizing network performance, cutting down on energy use, and guaranteeing dependable data delivery. The suggested method opens the door to more effective and scalable IoT networks by providing a solid solution for QoS improvement in IoT.

Keywords:
Internet of Things (IoT), Quality of Service (QoS), Deep Belief Lion Optimization (DBLO), Lion Optimization Algorithm (LOA), Deep Belief Networks (DBN), Cluster-Based IoT Network, End-to-End Delay, Packet Delivery Ratio (PDR), Routing Overhead, Throughput, Energy Consumption

Source:
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103 No. 17 -- 2025

Full Text

Title:
HOW CAN MULTIMODAL DEEP LEARNING DETECT VIOLENCE IN LOW-RESOURCE SINHALA SOCIAL MEDIA POSTS?

Author:
U DIKWATTA, TGI FERNANDO

Abstract:
Social media is an integral aspect of daily life, yet it also serves as an avenue for disseminating violent content. Image-based posts, in particular, serve as powerful vehicles for such content, prompting efforts to detect and combat this issue automatically. While studies predominantly focus on English content, fewer explore languages with limited digital resources, such as Sinhala. Our research pioneers the classification of Sinhala image posts based on both textual and visual features and introduces foundational work for this low-resource language by adapting standard models to this underexplored domain. As a first step, we introduce a new annotated dataset of Sinhala image posts with extracted textual and visual components. We employed a customized Tesseract OCR and the EAST library for automatic extraction of these components. After pre-processing the textual data, we utilized models including LSTM, GRU, 1D CNN, and ensemble models combining 1D CNN with GRU, LSTM, BiGRU, and BiLSTM, alongside word2vec embeddings. Additionally, we incorporated XLM-R, a multilingual transformer-based model, to enhance textual classification. For visual analysis, pre-trained models such as ResNet, AlexNet, ViT, and EfficientNet were employed. Finally, we employed an SVM-based late-fusion multimodal model for classification. Our approach achieved 95% accuracy, surpassing single-modality results. This marks significant progress in addressing violent content in Sinhala image posts on social media. Although the architecture employs standard models, adapting them to the Sinhala language and creating a baseline dataset constitute a significant contribution.
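Late fusion as used above combines per-modality outputs in a second-stage classifier. The paper's fuser is an SVM; to keep this sketch dependency-free, a least-squares linear fuser stands in, and the synthetic "text" and "image" probabilities are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-post violence probabilities from a text model and an
# image model; in the paper these would come from the unimodal networks.
text_p = rng.uniform(size=(100, 1))
image_p = rng.uniform(size=(100, 1))
y = ((text_p + image_p).ravel() > 1.0).astype(float)  # toy ground truth

# Late fusion: stack the modality scores (plus a bias column) and fit a
# simple linear fuser by least squares instead of the paper's SVM.
Z = np.hstack([text_p, image_p, np.ones((100, 1))])
w, *_ = np.linalg.lstsq(Z, y, rcond=None)
pred = (Z @ w > 0.5).astype(float)
print((pred == y).mean())
```

The key design point is that fusion operates on modality-level decisions rather than raw features, so each unimodal model can be trained and swapped independently.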

Keywords:
Deep Learning, Multimodal, NLP, Social Media, Violence

Source:
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103 No. 17 -- 2025

Full Text

Title:
ADVANCING CERVICAL CANCER DETECTION AND EARLY DIAGNOSIS WITH RESNEXT-BASED TRANSFER LEARNING

Author:
G. V. SURESH, GVS RAJU, HARI K. MADIRAJU, MOHAMMED SAJEENA

Abstract:
Cervical cancer is the fourth leading cause of cancer-related deaths among women worldwide, with mortality rates projected to rise significantly by 2030, despite being highly preventable and treatable when detected at an early stage. Previous studies have compared DL-based models with conventional screening methods; however, their performance in detecting early-stage cervical cancer has been poor. This work introduces a new DL-based method for cervical cancer detection using a colposcopy dataset, which involves three simple stages: image enhancement, feature selection, and classification. Image enhancement methods are first used to improve the quality of colposcopy images, thereby improving feature extraction, followed by feature selection using simulated annealing to identify the most relevant features for classification. The features are then used to train a ResNeXt classifier, achieving an accuracy of 98%. The proposed model is compared with state-of-the-art deep learning models, including VGG16 and ResNet-50; the results demonstrate that the proposed model achieves improved performance in terms of accuracy and efficiency.

Keywords:
Cervical Cancer, ResNeXt, VGG16, ResNet-50, Transfer Learning

Source:
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103 No. 17 -- 2025

Full Text

Title:
AI-ENHANCED ENERGY MANAGEMENT FOR OPTIMIZING RENEWABLE ENERGY INTEGRATION IN SMART CITIES

Author:
T. SUKANYA, CHITTURI SUGUNALATHA, B VINAY KUMAR, A. MUTHUKRISHNAN, GUDISA ROHINI, SALA SUREKHA, ARUNA VIPPARLA

Abstract:
This paper proposes an artificial intelligence-based energy management system, implemented on the Internet of Things in smart cities, to optimize the amount of renewable energy used, reduce costs, and improve grid stability. The system combines machine learning methods (LSTM and SVM), Mixed-Integer Linear Programming (MILP) optimization, and reinforcement learning (RL) to predict energy generation and storage, as well as to balance load in the grid. We validated the results with real data and showed that our model reduced energy costs by 12%, increased the use of renewable energy by 10%, and improved energy balance by 2.3%. Grid stability was also enhanced, with a 66% decrease in failures and a 50% reduction in outage periods. Although the system demonstrated successful outcomes, it relies on data quality and computational power. Future efforts will prioritize improving prediction accuracy using up-to-date weather data and expanding the system to encompass larger urban areas. Such systems hold excellent promise for reliable, energy-efficient, and sustainable energy management in smart cities, enabling innovative and eco-friendly urban infrastructure.

Keywords:
AI-Driven Energy Management, Smart Cities, Renewable Energy Optimization, Machine Learning, Grid Stability

Source:
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103 No. 17 -- 2025

Full Text

Title:
PREDICTION OF THE INDONESIA STOCK EXCHANGE COMPOSITE WITH TIME-SERIES EXTERNAL AND TECHNICAL FACTORS USING ARTIFICIAL NEURAL NETWORK

Author:
ADITYA EKA PARAMASATYA, VIANY UTAMI TJHIN

Abstract:
The Indonesia Stock Exchange Composite is indicative of the movement of all stocks traded on the Indonesia Stock Exchange. Investors use it to measure capital gains and to benchmark investment portfolio performance. Accurate prediction of a stock market composite remains a challenging task, particularly in emerging markets where external economic factors such as central bank interest rates, world oil prices, inflation rates, and the USD/IDR exchange rate exert significant influence. Existing research on prediction models using artificial neural networks often focuses on specific stocks and overlooks the combined influence of macroeconomic and technical indicators. This research addresses that gap by exploring artificial neural network models to predict the composite price, integrating composite technical indicators with external macroeconomic factors. The Cross-Industry Standard Process for Data Mining (CRISP-DM) is used as the framework to guide development. Secondary time-series data for each factor, covering the period 2019 to 2024, were used. Empirical results demonstrate that the Neural Net model achieves high predictive accuracy and outperforms the Deep Learning model. The findings highlight the effectiveness of combining macroeconomic and technical factors in a data mining process with artificial neural networks for composite price forecasting, and offer practical implications for investors and analysts in emerging markets.

Keywords:
IDX Composite, Stock, Artificial Neural Network, Artificial Intelligence, CRISP-DM

Source:
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103 No. 17 -- 2025

Full Text

Title: |
NEUROEXPLAINAI: AN EXPLAINABLE AI AND STATISTICAL FRAMEWORK FOR BRAIN TUMOR
DIAGNOSIS AND SEVERITY PREDICTION USING MULTIMODAL MRI WITH NEUROFUSIONNET |
|
Author: |
V. HARI PRASAD , DR. T. BHASKAR , SUKANYA LEDALLA , M. VARAPRASAD RAO , A. HARSHAVARDHAN |
|
Abstract: |
The correct diagnosis and prediction of malignancy in brain tumors are critical for neuro-oncology, as they directly influence clinical decision-making. Although deep learning models have had notable success in tumor classification and segmentation based on MRI data, most existing approaches are limited in three aspects: they build on imaging modalities only, disregard clinically relevant metadata, and lack interpretability because explainable AI (XAI) is not integrated. To overcome these limitations, we present NeuroExplainAI, an explainable deep learning framework for holistic brain tumor diagnosis and grading. We present NeuroFusionNet, a dual-task architecture that fuses deep CNN features with hand-crafted radiomic descriptors and patient-level clinical metadata through a data-attention mechanism. This allows classification (HGG versus LGG) and severity scoring to be performed concurrently. For decision transparency, both spatial and channel-level explanations are included using Grad-CAM++ and SHAP. The model is trained and tested on the BraTS 2021 dataset, reaching 98.34% accuracy, a 97.73% F1-score, and MAE = 0.38 for severity prediction. This paper provides novel insights into the clinical interpretability of multimodal fusion and attention-based weighting, in addition to their effect on predictive performance. An ablation study and comparisons with state-of-the-art methods demonstrate the necessity and effectiveness of each component. The incorporation of explainable AI techniques builds trust and improves usability in clinical workflows, making NeuroExplainAI an appealing platform for reliable, interpretable, and individualized brain tumor assessment. |
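The data-attention fusion described above can be illustrated with a small NumPy sketch: each modality embedding (CNN features, radiomics, clinical metadata, already projected to a shared dimension) is weighted by a softmax over per-modality relevance scores and summed. The scores and embeddings here are toy values, not the paper's learned parameters:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a score vector.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def attention_fuse(cnn_feat, radiomic_feat, meta_feat, scores):
    """Weight each same-dimensional modality embedding by a softmax over
    per-modality relevance scores and sum into one fused embedding."""
    w = softmax(np.asarray(scores, dtype=float))
    stacked = np.stack([cnn_feat, radiomic_feat, meta_feat])  # (3, d)
    return w, (w[:, None] * stacked).sum(axis=0)

# Toy 4-dimensional embeddings; in the paper these would be learned features.
cnn = np.full(4, 2.0)
rad = np.full(4, 1.0)
meta = np.full(4, 0.0)
weights, fused = attention_fuse(cnn, rad, meta, scores=[2.0, 1.0, 0.5])
```

In a trained network the scores would themselves be produced from the inputs, so the weighting adapts per patient rather than being fixed as here.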
|
Keywords: |
Brain Tumor Diagnosis, Multimodal MRI, Explainable AI, Severity Prediction, Deep
Learning Framework |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
AI-BLOCKCHAIN HYBRID SMART CONTRACT MODEL: FRAUD DETECTION AND IMMUTABLE RECORD
KEEPING IN INSURANCE |
|
Author: |
POORANI MITHILA S , GURUPANDI M , PRIYANKA K , SOLAIPRIYA S |
|
Abstract: |
The insurance sector is being transformed through the combination of artificial
intelligence (AI) and blockchain technologies. This study proposes the
AI-Blockchain Hybrid Smart Contract Model (AIBSCM), which combines AI-based
fraud detection with blockchain-based smart contracts to allow for automated
insurance claim processing. A synthetic dataset of 1,000 insurance claims was
used to train a random forest model, which achieved 92% accuracy on training
data; however, real-world testing revealed difficulty in detecting fraudulent
claims from under-represented categories. A blockchain simulation was conducted
to demonstrate the secure storage and automated execution of claims, with smart contracts providing transparency and immutability. The architecture integrates decentralised oracles, zero-knowledge proofs (ZKPs), federated learning, and a DAO governance mechanism to provide a privacy-conscious, decentralised, and robust solution for the insurance business. Future work will examine real-world deployment and integration with regulations. The integration of these
technologies seeks to address traditional insurance systems' issues, such as
data privacy concerns and a lack of transparency. By investigating real-world
deployment and regulatory compliance, this model has the potential to transform
the insurance business by delivering a safe and efficient method for dealing
with false claims. This innovative method has the potential to boost client
trust while also streamlining insurance company operations. Overall, the
combination of blockchain and privacy-conscious technology might result in
increased reliability and a transparent insurance sector. |
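As a rough illustration of the random-forest idea behind AIBSCM's fraud detector, the sketch below bags weighted decision stumps over an imbalanced synthetic claims set; the class up-weighting mirrors one common mitigation for the under-represented-category problem the abstract reports. All data, features, and parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_stump(X, y, w):
    """Pick the single-feature threshold split with lowest weighted error."""
    best = (0, 0.0, 1, np.inf)  # (feature, threshold, polarity, weighted error)
    for j in range(X.shape[1]):
        for t in np.percentile(X[:, j], [25, 50, 75]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - t) > 0, 1, 0)
                err = np.sum(w * (pred != y))
                if err < best[3]:
                    best = (j, t, pol, err)
    return best[:3]

def predict_stump(stump, X):
    j, t, pol = stump
    return np.where(pol * (X[:, j] - t) > 0, 1, 0)

# Imbalanced toy claims data: fraud (label 1) is the rare class.
X = np.vstack([rng.normal(0, 1, (180, 3)), rng.normal(2.5, 1, (20, 3))])
y = np.array([0] * 180 + [1] * 20)
# Up-weight the rare fraud class so stumps do not ignore it.
w = np.where(y == 1, 9.0, 1.0)
w = w / w.sum()

# Bagged ensemble of stumps: a much-simplified random-forest stand-in.
stumps = []
for _ in range(25):
    idx = rng.integers(0, len(y), len(y))       # bootstrap resample
    stumps.append(fit_stump(X[idx], y[idx], w[idx] / w[idx].sum()))

votes = np.mean([predict_stump(s, X) for s in stumps], axis=0)
pred = (votes > 0.5).astype(int)
accuracy = float((pred == y).mean())
```

A production model (like the 92%-accuracy forest in the study) would use full decision trees and held-out evaluation; the point here is only the bootstrap-plus-vote structure and the class weighting.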
|
Keywords: |
Artificial Intelligence, Blockchain, Smart Contracts, Transparency, Claim
Processing. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
OPTIMIZING ENERGY-EFFICIENT ENCRYPTION IN IOT: EXPLORATION OF STREAM AND
LIGHTWEIGHT BLOCK CIPHERS |
|
Author: |
DR.T.VENGATESH , DR.Y.J.NAZEER AHMED , DR.VITHYA GANESAN , N.KIRUBAKARAN,
DR.A.JYOTHI BABU , DR. P.G.SURAJ , R.DEEPAK , D.SUNANTHA |
|
Abstract: |
The massive proliferation of IoT devices has posed intricate problems for protecting data delivery within limited computational resources. Such environments are usually very demanding for conventional encryption protocols. This paper fills an urgent knowledge gap by critically comparing lightweight block ciphers and stream ciphers for energy-efficient encryption in IoT networks. Although available studies tend to consider performance or security as the key aspect, our study is unique in that it combines throughput, latency, and energy consumption to give a multi-dimensional assessment. We characterize the actual trade-offs and applicability of the three modes CTR, OFB, and CFB by deploying them on a real-time testbed. Findings reveal that CTR mode offers the best efficiency-performance trade-off. The study offers practical information on cipher selection for the future development of lightweight encryption approaches, with a focus on energy-constrained environments. |
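CTR mode, the best performer in this comparison, turns a block cipher into a stream cipher: each keystream block is the encryption of a nonce plus counter, so blocks are independently computable and the same XOR operation both encrypts and decrypts. The sketch below substitutes a hash for the block cipher purely to show the mechanics; it is not secure and is not one of the ciphers benchmarked in the paper:

```python
import hashlib

BLOCK = 16  # bytes per keystream block

def toy_block_encrypt(key: bytes, block_input: bytes) -> bytes:
    """Stand-in for a real block cipher. NOT secure; illustration only."""
    return hashlib.sha256(key + block_input).digest()[:BLOCK]

def ctr_xcrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """CTR mode: keystream_i = E_k(nonce || counter_i); output = data XOR
    keystream. The same function both encrypts and decrypts."""
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        counter = (i // BLOCK).to_bytes(8, "big")
        ks = toy_block_encrypt(key, nonce + counter)
        chunk = data[i:i + BLOCK]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

key, nonce = b"sixteen-byte-key", b"unique-nonce"
msg = b"sensor reading: temp=21.7C hum=40%"
ct = ctr_xcrypt(key, nonce, msg)
pt = ctr_xcrypt(key, nonce, ct)  # decryption is the same call
```

Because every counter block is independent, CTR parallelizes and precomputes well, which is one plausible reason it scores highly on throughput and latency in constrained devices; the nonce must never repeat under the same key.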
|
Keywords: |
Lightweight Encryption, IoT Security, Stream and Block Ciphers, Energy
Efficiency, CTR, OFB, CFB Modes |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
HYBRID DEEP LEARNING APPROACHES FOR CYBERATTACK DETECTION AND PREVENTION IN
CRITICAL INFRASTRUCTURE SYSTEMS |
|
Author: |
ARUNA VIPPARLA, DR A. MUTHUKRISHNAN, BADDEPAKA PRASAD, KONA KRISHNA PRIYA,
SWETHA PENDEM, SHOBANA GORINTLA, ANIL KUMAR PALLIKONDA |
|
Abstract: |
This paper proposes a new hybrid deep learning framework to predict, detect, and
stop cyber-attacks in critical infrastructure systems. The goal is to increase
the resilience of systems like power grids, transportation networks, and water
supply systems by combining supervised learning (CNNs), unsupervised anomaly
detection (Autoencoders, GANs), and reinforcement learning (DQN). Extensive experiments on real-world data indicate that the proposed model surpasses traditional methods, reaching accuracy, precision, and recall of 98.4%, 97.5%, and 98.2%, respectively, and lowering the false-positive rate to 3.5%. Experiment results show the efficiency of our hybrid approach in detecting both well-known and novel cyber threats, with a detection time of approximately 30 ms. This paper helps
advance cybersecurity for critical infrastructure by providing a scalable
real-time defense that can react to new threats, enhancing the overall system
resilience and maintainability. |
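One building block named above, autoencoder-based anomaly detection, can be illustrated with a linear stand-in: fit a low-rank reconstruction on normal traffic and flag samples whose reconstruction error exceeds a threshold learned from normal data. The PCA projection below replaces the paper's trained autoencoder, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# "Normal" telemetry lives near a 2-D subspace of a 6-D feature space.
basis = rng.normal(size=(6, 2))
normal = rng.normal(size=(300, 2)) @ basis.T + 0.05 * rng.normal(size=(300, 6))

# Fit a linear "autoencoder" (PCA): encode to 2 components, decode back.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
components = Vt[:2]                        # (2, 6) encoder/decoder weights

def reconstruction_error(x):
    z = (x - mean) @ components.T          # encode
    x_hat = z @ components + mean          # decode
    return float(np.linalg.norm(x - x_hat))

# Threshold from the normal data's own error distribution (99th percentile).
errors = np.array([reconstruction_error(x) for x in normal])
threshold = float(np.percentile(errors, 99))

attack = rng.normal(size=6) * 3.0   # off-subspace point, stand-in for an attack
is_anomaly = reconstruction_error(attack) > threshold
```

A nonlinear autoencoder generalizes the same recipe: whatever the model cannot reconstruct well was not in its training distribution, which is how previously unseen attack patterns get flagged.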
|
Keywords: |
Deep Learning, Cybersecurity, Critical Infrastructure, Intrusion Detection,
Convolutional Neural Networks (CNN) |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
DETECTING SOIL EROSION PATTERNS USING CONVOLUTIONAL NEURAL NETWORKS |
|
Author: |
Dr BJD KALYANI , JAYA KRISHNA MODADUGU , Dr CH N SANTOSH KUMAR , Dr SARABU
NEELIMA , N M DEEPIKA , RITHIKA NARENDRAN |
|
Abstract: |
Environmental sustainability and agricultural output are seriously threatened by soil erosion. This research utilizes deep learning techniques to create an automated system that detects soil erosion from photographs in order to address these issues. Specifically, visual data and structured environmental parameters are combined with a Convolutional Neural Network (CNN) to identify soil conditions and forecast the likelihood of erosion. Building a CNN model to analyze and categorize photographs according to soil conditions is the main task of this research. Max-pooling layers diminish the dimensionality of feature maps and record the most important characteristics, eliminating unnecessary information. Fully connected layers then interpret these characteristics to produce the final categorization. The model's output is a binary classification indicating the presence or absence of soil erosion. In addition to image data, the model incorporates structured environmental data such as rainfall and temperature. This data is processed through a separate branch of the network and combined with the CNN's output to enhance prediction accuracy. By integrating environmental factors, the model can account for conditions that influence soil erosion, leading to more robust predictions. The CNN model is trained using the Adam optimizer with binary cross-entropy as the loss function. This configuration is suitable for binary classification tasks and helps optimize the model's performance. The training process includes 20 epochs with a batch size of 8, and class weights are used to address any imbalance in the dataset. |
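The max-pooling operation the abstract describes, keeping the strongest activation in each window while halving spatial resolution, is simple to show directly in NumPy:

```python
import numpy as np

def max_pool_2x2(feature_map):
    """2x2 max-pooling with stride 2: keep the strongest activation in each
    window, halving each spatial dimension (odd edges are trimmed)."""
    h, w = feature_map.shape
    trimmed = feature_map[: h - h % 2, : w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[1, 3, 2, 0],
                 [4, 2, 1, 1],
                 [0, 1, 5, 6],
                 [2, 2, 7, 8]], dtype=float)
pooled = max_pool_2x2(fmap)  # [[4, 2], [2, 8]]
```

Each output cell is the maximum of one 2x2 window, which is why pooling discards position detail but preserves whether a feature fired, the behaviour the abstract relies on for erosion cues.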
|
Keywords: |
Soil Erosion, Deep Learning, Convolutional Neural Network (CNN), VGG16, Binary
Cross-Entropy |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
BIOFUSIONNET: REAL-TIME MULTIMODAL BIOMEDICAL IMAGE AND SIGNAL SYNTHESIS VIA
PREDICTIVE ENCODING AND CROSS-RESOLUTION ATTENTION MECHANISMS |
|
Author: |
JANARDHAN G, DR.A.UGENDHAR, C NAGESH, RAHUL SURYODAI, HARIPRASAD E, DR.
NICHENAMETLA RAJESH, PRAVEEN KULKARNI |
|
Abstract: |
The increasing complexity of clinical scenarios necessitates advanced biomedical
imaging and sensing frameworks that address real-time challenges, such as
multimodal data integration, noise suppression, and accurate decision support.
Current solutions often fall short in adapting to diverse imaging environments,
real-time constraints, and providing interpretable outputs, limiting their
utility in dynamic clinical workflows. To address these limitations, we propose
a novel, end-to-end system integrating multiple advanced methods for biomedical
imaging and sensing. Our Adaptive Multimodal Fusion Network (AMFN)
combines multimodal imaging data (MRI, CT, Ultrasound, etc.), dynamically
weighting modalities using an adaptive fusion module to generate enhanced
composite images. Self-supervised learning further reconstructs missing
modalities for robust diagnostic accuracy. For real-time insights, the
Lightweight Dynamic Segmentation (LDS) module uses a transformer-based attention
mechanism in a lightweight neural architecture to segment critical regions of
interest with high precision, even on portable edge devices. Noise and artifact
suppression are managed by the Contextual Noise Elimination Module (CNEM), which
employs convolutional autoencoders and attention-guided filters to enhance image
clarity without sacrificing key clinical details. The Predictive Augmented
Imaging (PAI) module leverages generative adversarial networks to predict and
augment low-quality or incomplete imaging data, ensuring comprehensive results.
Finally, the Explainable Imaging Support System (EISS) delivers transparent,
interpretable clinical predictions with saliency maps and uncertainty
quantification to bolster clinician trust. This integrated framework improves
diagnostic accuracy, supports real-time adaptability, and ensures robust
usability in resource-limited settings, setting a new benchmark for biomedical
imaging in real-world clinical applications. |
|
Keywords: |
Adaptive Fusion, Biomedical Imaging, Predictive Augmentation, Noise Suppression,
Explainable AI, Scenarios |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
A COMPREHENSIVE STUDY OF NON-INVASIVE ANEMIA DETECTION TECHNIQUES: CHALLENGES
AND FUTURE DIRECTIONS |
|
Author: |
CHAITRA S P , D R RAMESH BABU |
|
Abstract: |
Anemia is a nutritional deficiency that is highly prevalent in underdeveloped and developing nations. Anemia associated with pregnancy poses severe health and mortality risks to both mother and infant. Early detection and treatment of anemia is the only way to reduce these mortality risks. In many developing and underdeveloped countries, early detection is challenging due to the low reach and higher cost of invasive diagnostic methods. To this end, many non-invasive methods have been developed. These methods use various imaging sites, such as the eye conjunctiva, fingertip, palm, and lip mucosa, for the detection of anemia, and can be broadly categorized into conventional and deep learning methods. Though many techniques have been developed in both categories, the area of non-invasive detection still presents many issues in terms of accuracy and false positives. Various challenges in image acquisition, feature engineering, and classifier design and optimization need to be addressed to achieve higher accuracy and fewer false positives. This work presents a critical analysis of existing non-invasive techniques for the detection and categorization of anemia. The analysis covers three categories: image acquisition, feature engineering, and classification. The aim is to identify research gaps and present a solution architecture to address them. |
|
Keywords: |
Anemia; NonInvasive Methods; Machine Learning; Deep Learning; Palpebral
Conjunctiva; Fingernail Bed Analysis; |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
THE IMPACT OF AI-GENERATED CONTENT TO INCREASE CUSTOMER ENGAGEMENT ON SOCIAL MEDIA |
|
Author: |
SANNAS SALSABILA R , NILO LEGOWO |
|
Abstract: |
Social media now plays a crucial role in how companies interact with customers. However, less than 50% of businesses have utilized it effectively. The main issue is that content is uninteresting and irrelevant, leading to a decrease in customer engagement. Therefore, this research aims to determine whether AI-generated content can enhance customer engagement on social media. The research was conducted by distributing questionnaires to residents of DKI Jakarta, Indonesia, who have purchased products marketed through social media (TikTok and Instagram) using AI-generated content. In total, 494 responses were collected. The research uses the customer engagement in social media (CESM) framework combined with several other variables that support studies on customer engagement. This study demonstrates that AI-Generated Content (AIGC) significantly enhances customer engagement on social media. Increased social interaction and hedonic motivation effectively address challenges on social media by generating content quickly while maintaining visually appealing quality. Additionally, the improvement in satisfaction, influenced by increased commitment and trust, can trigger active customer interactions, ultimately boosting customer engagement. Meanwhile, positive emotion has a relatively lower impact, as customer responses to AIGC's ability to provide relaxation and humor are often not very strong. Based on these findings, it is hoped that business owners can address issues that arise during the marketing process by understanding the potential and limitations of AI-generated content creation. |
|
Keywords: |
AI-Generated Content, Social Media, Customer Engagement, Customer Interaction,
AI Content |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
AI MARKETING FOR REAL-TIME CUSTOMER SEGMENTATION: AUTOENCODER-ENHANCED
TIME-EVOLVING CLUSTERING |
|
Author: |
IVAN DIRYANA SUDIRMAN , PATAH HERWANTO |
|
Abstract: |
Conventional customer segmentation methods often rely on static models that fail
to capture dynamic consumer behavior. Consumer behavior is influenced by
external factors such as economic shifts, cultural events, and seasonal trends,
necessitating an adaptive approach. This study explores AI marketing
segmentation as a solution to track and predict consumer state transitions. The
research utilizes deep learning-based feature extraction and time-evolving
clustering to analyze grocery store transaction data from April to June 2022,
covering pre-Ramadan, Ramadan, and post-Ramadan periods in Indonesia. An
autoencoder neural network is employed to reduce high-dimensional transaction
data into a latent representation, which is then clustered using MiniBatch
K-Means to identify evolving customer segments. The study also applies
transition analysis to track segment shifts over time. Findings reveal that
customer behavior is fluid, with distinct segment transitions occurring in
response to Ramadan. The transition analysis highlights that static segmentation
models fail to capture these changes. AI-driven marketing segmentation enables
faster tracking of shifting consumer behaviors, allowing businesses to
proactively adjust their marketing strategies. |
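The pipeline summarized above, compressing transactions to a latent representation and clustering it, can be sketched with NumPy. A PCA projection stands in for the autoencoder bottleneck and a plain k-means loop for MiniBatch K-Means; data shapes and cluster counts are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy basket data: 3 shopper behaviours over 12 product-category counts.
centers = rng.normal(0, 4, size=(3, 12))
X = np.vstack([c + rng.normal(size=(60, 12)) for c in centers])

# Latent representation: top-3 principal components stand in for the
# autoencoder bottleneck used in the paper.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
Z = (X - mean) @ Vt[:3].T

def kmeans(Z, k, iters=30, seed=0):
    """Plain Lloyd's k-means on the latent vectors."""
    r = np.random.default_rng(seed)
    cent = Z[r.choice(len(Z), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(Z[:, None] - cent[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                cent[j] = Z[labels == j].mean(axis=0)
    return labels

labels = kmeans(Z, k=3)
```

Transition analysis as described in the abstract would then re-cluster each period's data (pre-Ramadan, Ramadan, post-Ramadan) and cross-tabulate the labels each customer receives across periods to see which segments they move between.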
|
Keywords: |
Dynamic Segmentation, Customer Behavior, Clustering, AI Marketing, Deep
Learning, Autoencoder |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
ACCELERATING PATHOLOGY REPORT DIGITIZATION: A MULTI-ENGINE OCR AND LLM FRAMEWORK
FOR HEALTHCARE APPLICATIONS |
|
Author: |
NITIN Y. SURYVANSHI, DR. RAKESH J. RAMTEKE |
|
Abstract: |
Digitization and structuring of pathology reports are essential in modern
healthcare for enhancing patient care, data analytics, and medical research.
This study presents a framework called Dual-integrated Text Extraction using
Hybrid OCR Engines (DiText-OCR), which leverages multiple OCR tools and
domain-specific dictionaries to accurately digitize diverse text types,
including printed text and low-quality scans. The extracted text is further
processed using Large Language Models (LLMs) for named entity recognition,
relationship extraction, and data structuring. The resulting structured data are
integrated into healthcare databases and systems, enabling applications in
clinical decision support, research, and analytics while ensuring
interoperability. Despite its effectiveness, the framework faces challenges,
such as handling non-standard report formats, maintaining patient privacy, and
addressing the current limitations of OCR and LLM technologies in medical
contexts. Future research aims to integrate this system with electronic health
records, extend its application to other medical documents, and utilize
structured data for advanced research and predictive analytics. By addressing
these challenges, the proposed framework has the potential to revolutionize
medical data management, ultimately improving patient outcomes, enhancing
clinical efficiency, and fostering innovation in healthcare. |
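The first, rule-based half of such a structuring pass can be sketched with plain regular expressions; fields the rules miss are what the LLM stage would handle. The report text and patterns below are hypothetical illustrations, not the DiText-OCR dictionaries:

```python
import re

# Hypothetical report text and field patterns. Real reports vary widely,
# which is exactly why the framework pairs OCR with LLM post-processing.
REPORT = """Patient Name: JANE DOE
Specimen: Breast, left, core biopsy
Diagnosis: Invasive ductal carcinoma, Grade 2
ER: Positive  PR: Positive  HER2: Negative"""

PATTERNS = {
    "specimen": r"Specimen:\s*(.+)",
    "diagnosis": r"Diagnosis:\s*(.+)",
    "grade": r"Grade\s*(\d)",
    "her2": r"HER2:\s*(\w+)",
}

def structure_report(text):
    """Rule-based first pass: any field left as None would be routed to the
    language-model stage for extraction from context."""
    record = {}
    for field, pat in PATTERNS.items():
        m = re.search(pat, text)
        record[field] = m.group(1).strip() if m else None
    return record

record = structure_report(REPORT)
```

A dictionary of domain terms (tumour types, marker names) can be used both to validate matches and to correct OCR confusions before this step, in the spirit of the framework's domain-specific dictionaries.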
|
Keywords: |
Pathology report digitization, DiText-OCR framework, Optical Character
Recognition (OCR), Large Language Models (LLMs), healthcare data
interoperability, clinical decision support. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
SPARSE CHANNEL ESTIMATION IN MMWAVE MIMO-OFDM USING BI-LSTM WITH CHANNEL
SPARSITY REGULARIZATION |
|
Author: |
MAMATHA MALVI CHALAVADI , HALUGONA CHANDRAIAH SATEESH KUMAR , VEERENDRA DAKULAGI |
|
Abstract: |
The primary objective of millimeter wave (mmWave) Multiple-Input Multiple-Output
(MIMO) systems is to effectively estimate the Channel State Information (CSI).
Recent research has increasingly utilized the nuclear norm theory to recover a
lower-rank structure of channel matrices. Certain sub-optimal solutions to the
rank minimization issue arise when addressing nuclear norm-based convex
formulations that reduce channel estimation accuracy. To mitigate this, this research developed the Channel Sparsity Regularization – Bidirectional Long Short-Term Memory (CSR-Bi-LSTM) method, which effectively estimates the channel in mmWave MIMO-OFDM. The method incorporates sparsity as a constraint, ensuring that it focuses on essential paths, enhancing accuracy and reducing noise.
Bi-LSTM network captures the relationships among OFDM symbols and exploits the
spatial correlations between multiple antennas in MIMO. Moreover, it learns the
non-linear relationship between pilot signals and channels, which helps to enhance the accuracy of channel estimation. The developed algorithm obtained an accuracy of 0.0993 at a learning rate of 35, 0.990 at 45, 0.986 at 55, 0.982 at 65, and 0.976 at 75, when compared with other conventional techniques. |
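The sparsity constraint mentioned above is commonly realized with soft-thresholding, the proximal operator of an L1 penalty, which zeroes weak taps so the estimator concentrates on the few dominant propagation paths that characterize mmWave channels. The sketch below is a generic illustration of that operator on a toy channel, not the CSR-Bi-LSTM pipeline:

```python
import numpy as np

def soft_threshold(h, lam):
    """Proximal operator of the L1 penalty: shrink every tap toward zero and
    zero out taps with magnitude below lam, keeping only dominant paths."""
    return np.sign(h) * np.maximum(np.abs(h) - lam, 0.0)

# Toy mmWave-like channel: a few strong paths plus diffuse low-power taps.
rng = np.random.default_rng(4)
channel = 0.02 * rng.normal(size=64)        # weak scattering
channel[[3, 17, 40]] = [1.2, -0.8, 0.5]     # three dominant paths

estimate = soft_threshold(channel, lam=0.1)
```

After thresholding, only the three dominant taps survive (slightly shrunk), which is the sense in which a sparsity regularizer suppresses noise while preserving the essential paths.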
|
Keywords: |
Channel Estimation, Channel State Information, Millimeter wave, Multiple-Input
Multiple-Output and Pilot signals |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
DOMAIN-ADAPTIVE SENTIMENT ANALYSIS THROUGH BAYESIAN NETWORK–LOGISTIC
REGRESSION-BASED PROBABILISTIC DEPENDENCY MODELING |
|
Author: |
RAMAJAYAM G , Dr. R. VIDYA BANU |
|
Abstract: |
Cross-domain sentiment classification presents persistent challenges in opinion
mining due to vocabulary drift, contextual ambiguity, and polarity inconsistency
across product domains. Traditional classifiers trained on a single domain often
fail to generalize, reducing performance when exposed to new, structurally
distinct datasets. This research introduces a probabilistically grounded
sentiment classification framework—Bayesian Network–Logistic Regression
(BN-LR)—designed to address these challenges within multi-domain online review
environments. BN-LR integrates a Bayesian Network to model conditional
dependencies among sentiment-bearing features, capturing latent inter-feature
relationships across syntactic structures. These probabilistic insights are
dynamically incorporated into a logistic regression classifier, enabling
adaptive feature weighting and uncertainty-aware sentiment inference. As an IT
contribution, BN-LR offers a scalable, interpretable, and statistically
principled solution suitable for intelligent recommendation engines, feedback
analytics, and sentiment-based decision systems across digital platforms.
Evaluated on Amazon reviews across four domains, BN-LR consistently delivers
high accuracy without requiring domain-specific retraining or external lexicons.
The proposed framework enhances real-world information systems by enabling
robust cross-domain sentiment generalization, fulfilling a critical need in
adaptive text analytics for IT-driven e-commerce intelligence. |
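The coupling described above, probabilistic feature relevance feeding a logistic classifier, can be caricatured in NumPy by rescaling features with dependency-derived weights before fitting. The relevance weights and data here are invented for illustration; the paper derives its weights from a Bayesian network over sentiment-bearing features:

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, feature_weights, lr=0.5, iters=400):
    """Plain logistic regression with inputs rescaled by per-feature
    relevance weights (a stand-in for BN-derived dependency scores)."""
    Xw = X * feature_weights
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(iters):
        p = sigmoid(Xw @ w + b)
        w -= lr * (Xw.T @ (p - y) / len(y))   # gradient of log-loss
        b -= lr * (p - y).mean()
    return w, b, Xw

# Toy review features: 2 informative sentiment cues + 2 noise features.
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.8 * X[:, 1] + 0.3 * rng.normal(size=300) > 0).astype(float)
relevance = np.array([1.0, 1.0, 0.2, 0.2])  # hypothetical BN-derived weights

w, b, Xw = fit_logistic(X, y, relevance)
acc = float(((sigmoid(Xw @ w + b) > 0.5) == (y == 1)).mean())
```

Down-weighting features the dependency model deems unreliable in a new domain is one simple mechanism by which such a hybrid can transfer across product categories without retraining from scratch.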
|
Keywords: |
Cross-Domain Sentiment Analysis, Bayesian Network, Logistic Regression,
Probabilistic Modelling, Online Product Reviews, Domain Adaptation |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
IMPACT OF DIGITAL TRANSFORMATION OF THE HUMAN CAPITAL MANAGEMENT SYSTEM ON STATE
REGULATION OF THE NATIONAL ECONOMY DEVELOPMENT |
|
Author: |
KOSTIANTYN SHAPOSHNYKOV , ANASTASIIA DUKA , GRYGORIY STARCHENKO , IHOR CHORNODID
, DENYS KRYLOV , OLENA SELEZNOVA |
|
Abstract: |
The study reveals the role of human capital as a key factor of sustainable
economic growth and a tool for state regulation of the national economy in the
context of digital transformation. The purpose of the study is to determine the
role of human capital in state regulation of the national economy development in
Ukraine in the context of digital transformation and to substantiate directions
for improving state regulation to increase the management efficiency of this
capital. The research used methods of comparative analysis, statistical
generalization, structural and logical analysis, graphic modeling, and the
inductive-deductive approach. The source of empirical data was indicators of the
State Statistics Service of Ukraine and Eurostat for 2019-2023, which made it
possible to trace changes in the functioning of the human capital system before the start of the full-scale war and under its influence. It is shown that indicators of educational level, demographic situation, employment, and digital competencies of the population are decisive for the formation of a competitive
economy. Despite the high level of higher education coverage in Ukraine (28% in 2023), other socio-economic indicators, in particular low GDP per capita, high unemployment, and negative natural growth, indicate low efficiency in realizing human potential. Emphasis is placed on demographic challenges, in particular population decline and aging, which threaten stable reproduction of human capital. Comparative analysis with EU countries shows a deep gap in economic productivity, which requires systematic modernization of approaches to human
capital management. The digital economy is transforming the labor market,
increasing the demand for new competencies, while simultaneously displacing
traditional forms of employment. Both positive (innovation, new forms of
employment, access to the global market) and negative (increasing inequality,
displacement of labor) effects of digitalization are identified. The need for
digital modernization of the human capital management system based on analytics,
forecasting, and information technologies is proven. The strategic role of the state in human capital development is outlined: from investing in education to forming an inclusive digital policy. Priority areas of state regulation are
proposed, aimed at increasing digital competencies, adapting the education
system, supporting innovative business, regulating new forms of employment and
ensuring digital equality. |
|
Keywords: |
Human Capital; Digital Transformation; Government Regulation; Employment;
Digital Economy; Human Capital Management; Education; Digital Competencies;
Investment In Human Capital. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
SYSTEM ARCHITECTURE DESIGN AND ANALYSIS: MESH COMMUNITY OF PRACTICE WITH
GAMIFIED SOCIAL COLLABORATION TO STRENGTHEN CYBERSECURITY AWARENESS |
|
Author: |
MAYKIN WARASART , PALLOP PIRIYASURAWONG |
|
Abstract: |
Cybersecurity awareness remains a major challenge due to the evolving nature of
cyber threats and the difficulty of sustaining employees' engagement in
traditional training programs. To address this problem, the concept of a mesh
community of practice is introduced to be an effective way to increase
collaborative awareness and empower the workforce to be better prepared by
keeping pace with the latest threats and mitigation practices. In this research,
the authors aim to design an architecture of mesh community of practice driven
by social collaboration and gamification for cybersecurity awareness enhancement
and evaluate the designed architecture. The study began by presenting the designed architecture, followed by an evaluation based on expert judgment by ten subject matter experts with related experience from various organizations. These experts assessed and verified the appropriateness of the designed system architecture. The evaluation results indicated that the proposed design was rated at the highest level for its individual components (mean = 4.62, S.D. = 0.48) and for the entire system (mean = 4.58, S.D. = 0.69). The findings indicate that the proposed architecture can
effectively enhance awareness and engagement, offering a practical model for
continuous cybersecurity learning. |
|
Keywords: |
Cybersecurity Awareness, Gamification, Mesh Community of Practice, Social
Collaboration |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
A SMART AND SECURE CLOUD FRAMEWORK FOR AUTOMATED HEALTHCARE MONITORING THROUGH
VOICE PATHOLOGY DETECTION |
|
Author: |
B NARASIMHA SWAMY , LAKSHMEELAVANYA ALLURI , SHOBANA GORINTLA, N V S PAVAN KUMAR
, V SUMA AVANI , VAHIDUDDIN SHARIFF |
|
Abstract: |
As smart cities continue to advance, the demand for secure, automated, and
real-time healthcare services is growing to ensure sustainable and high-quality
healthcare monitoring. This research introduces a cloud-based framework that
integrates smart healthcare devices, environments, and stakeholders within smart
cities to enhance the affordability, accessibility, and security of healthcare
services. The primary objective is to develop a cloud-based system for real-time
voice pathology detection by analyzing voice and electroglottographic (EGG)
signals to accurately differentiate between normal and pathological conditions.
By leveraging machine learning models such as Gaussian Mixture Models (GMM) for
voice disorder classification, healthcare monitoring can be significantly
improved, enabling early diagnosis and intervention. Furthermore, this framework
aims to enhance the accessibility and scalability of healthcare services by
ensuring secure, automated, and remote health monitoring in smart city
environments. The proposed system collects voice and EGG signals from
internet-connected devices, transmitting them to the cloud for advanced data
analysis. A case study on voice pathology detection (VPD) demonstrated the
effectiveness of this approach, where local features extracted from voice
signals and shape and cepstral features from EGG signals were classified using a
GMM, achieving an accuracy of over 93%. The results are then communicated to
registered healthcare professionals for definitive diagnosis and appropriate
action. By addressing the complex healthcare needs of smart city citizens, this
framework provides a secure, scalable, and sustainable solution for real-time
healthcare monitoring and decision-making, contributing to the advancement of
smart and efficient healthcare services. |
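The classification core, scoring feature vectors under per-class Gaussian models, can be sketched with a single-component simplification of the GMM used in the paper (a full GMM fits several components per class via EM). The features and data below are synthetic stand-ins for the voice/EGG features described:

```python
import numpy as np

rng = np.random.default_rng(6)

def fit_gaussian(X):
    """Diagonal single-component Gaussian per class: a simplification of
    the multi-component GMM the paper uses."""
    return X.mean(axis=0), X.std(axis=0) + 1e-6

def log_likelihood(x, mean, std):
    # Log-density up to a constant, with a diagonal covariance.
    return float(np.sum(-0.5 * ((x - mean) / std) ** 2 - np.log(std)))

# Toy cepstral-like features for normal vs pathological voices.
normal = rng.normal(0.0, 1.0, size=(100, 5))
pathological = rng.normal(1.8, 1.2, size=(100, 5))

params = {
    "normal": fit_gaussian(normal),
    "pathological": fit_gaussian(pathological),
}

def classify(x):
    """Assign the class whose Gaussian gives the higher log-likelihood."""
    return max(params, key=lambda c: log_likelihood(x, *params[c]))

# Recall on the pathological training samples themselves (illustration only;
# a real evaluation would use held-out recordings).
preds = [classify(x) for x in pathological]
recall = preds.count("pathological") / len(preds)
```

In the cloud framework, this scoring step would run server-side on features extracted from the uploaded voice and EGG signals, with the decision forwarded to a clinician for confirmation.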
|
Keywords: |
Smart Healthcare Monitoring, Voice Pathology Detection; Smart Cities;
Cloud-Based Healthcare; Electroglottographic (EGG) Signals; Gaussian Mixture
Model (GMM); Real-Time Healthcare Analytics |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
ADDRESSING SECURITY CHALLENGES IN DECENTRALIZED ACCESS CONTROL: A BLOCKCHAIN AND
ADAPTIVE LEARNING PERSPECTIVE |
|
Author: |
CHINTUREENA THINGOM , M. BEULAH VIJI CHRISTIANA , K. BALAJI , T. RAVI ,
S.B.GOPAL , T VIJAY MUNI , A. THILAGAVATHY , P. JOHN AUGUSTINE , S. NOORAY
SASHMI , R G VIDHYA |
|
Abstract: |
Blockchain is quickly becoming a disruptive technology that provides people with unprecedented self-governance by virtue of its secure, transparent, and decentralized nature. It has come to form the basis for various industry applications, as well as an early impetus for next-generation paradigms such as cloud and edge computing. To address the security concerns of data-sharing applications with distributed access control, the current work proposes a new adaptive learning model (ALM). The system is validated with a Python implementation, and the proposed ALM is compared with established encryption algorithms such as RSA and AES. Various parameters are quantified by the research work, and the resulting data are tabulated in detail. The results demonstrate enhanced security, scalability, and computational efficiency for the proposed algorithm. |
|
Keywords: |
Cloud Computing; Edge Computing; Decentralized; Blockchain Skillset;
Transparent. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
TRANSFER LEARNING STRATEGIES FOR OPTIMISING FACIAL RECOGNITION ACCURACY |
|
Author: |
DR. V. V. R. MAHESWARA RAO, SAROJA PATHAPATI, DR. L. K. SURESH KUMAR, S.
NARAYANASAMY, AHMED ALKHAYYAT, SAI KRISHNA EDPUGANTI |
|
Abstract: |
Human-computer interaction relies heavily on facial emotion recognition (FER),
yet creating reliable models is difficult because of inter-subject variability
and a lack of personalised data. In this study, we offer a method for improving
the accuracy of face recognition with convolutional neural networks (CNNs) that
is based on transfer learning. Initially, a generic model is trained using the
massive AffectNet dataset; next, it is fine-tuned using the smaller AMIGOS
dataset, which contains subject-specific data. The solution targets dimensional
emotion prediction (valence and arousal), achieving Root Mean Square Errors
(RMSE) of 0.09 and 0.10, respectively. In addition, alternative data-sampling
strategies, both active and passive, are tested to minimize labelling cost
while preserving accuracy, using greedy sampling and Monte Carlo Dropout
uncertainty estimation. This approach offers a scalable, efficient
solution for personalised FER systems with real-world applications in
psychological health monitoring, adaptive interfaces as well as affect-aware
computing. The results demonstrate that only 20-30% of labelled personal data is
adequate for near-optimal performance. |
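The active-sampling idea summarised above — estimate uncertainty with Monte Carlo Dropout, then greedily query labels for the most uncertain samples — can be sketched without any deep learning framework. The passes and sample values below are made up; in the paper's setting each pass would be a stochastic forward pass of the fine-tuned CNN.

```python
import statistics

def mc_dropout_uncertainty(passes):
    """passes: T stochastic forward passes, each a list of per-sample predictions.
    Returns the per-sample standard deviation across passes (the uncertainty proxy)."""
    n = len(passes[0])
    return [statistics.pstdev(p[i] for p in passes) for i in range(n)]

def greedy_select(uncertainty, k):
    """Indices of the k most uncertain samples, to be labelled first."""
    return sorted(range(len(uncertainty)), key=lambda i: -uncertainty[i])[:k]

# Three hypothetical dropout passes over five samples (e.g. valence predictions)
passes = [
    [0.10, 0.50, 0.90, 0.40, 0.20],
    [0.12, 0.70, 0.88, 0.10, 0.21],
    [0.11, 0.30, 0.92, 0.70, 0.19],
]
u = mc_dropout_uncertainty(passes)
print(greedy_select(u, 2))  # samples with the widest disagreement across passes
```

Spending the labelling budget on such high-disagreement samples is what lets 20-30% of the personal data approach the fully-labelled performance.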
|
Keywords: |
Facial Recognition, Transfer Learning, Valence, Arousal, Convolutional Neural
Network, Deep Learning |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
DESIGN OF LOW POWER 9T SRAM CELL WITH EXPANDED NOISE MARGIN |
|
Author: |
THAKURENDRA SINGH, VINAY KUMAR TOMAR |
|
Abstract: |
SRAM (static random access memory) cells that operate at lower supply voltages
with higher stability have attracted major attention owing to their demand in
advanced microelectronic devices. In scaled technologies, the power dissipation
and stability of SRAM cells become of prime importance due to aggressive supply
voltage scaling. In this context, a novel 9T SRAM cell is proposed that
delivers a commendable improvement in both read and write stability along with
a reduction in power dissipation. The characteristic parameters of the proposed
9T SRAM cell are compared with state-of-the-art designs: the six-transistor
(6T), tunable access transistor 8T (TA8T), data-dependent power supply 11T
(D2P11T), and 11T SRAM cells. The read static noise margin (RSNM) of the
proposed 9T cell is enhanced by 1.95×, 1.92×, 1.87×, and 1.90× relative to the
6T, TA8T, D2P11T, and 11T cells, respectively, a result of using an isolated
read port during the read operation. Further, the write ability of the proposed
cell is increased by 1.12×, 1.05×, 1.12×, and 1.14×, and its read power is
reduced by 1.48×, 1.16×, 1.38×, and 1.66× relative to the same four designs. In
addition, the read access time is reduced by 1.08×, 1.01×, 1.04×, and 1.15×.
The proposed cell also exhibits a larger Ion-to-Ioff ratio, 3.5×, 3.45×, and
3.54× that of the 6T, TA8T, and 11T cells, which shows the applicability of the
9T SRAM cell for high-density arrays. The simulation work has been performed
with Cadence Virtuoso at the 45nm technology node. |
|
Keywords: |
Microelectronics, Low Power VLSI Design, SRAM, Power Consumption, Stability. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
MULTI-OBJECTIVE OPTIMAL DG PLACEMENT AND SIZING IN DISTRIBUTION SYSTEMS USING
LSF-MALO: A STUDY ON VARIOUS DG CONFIGURATIONS AND LOAD |
|
Author: |
NUR ATIQAH ABDUL RAHMAN, ZULKIFFLI ABDUL HAMID, NUR ASHIDA SALIM, ISMAIL
MUSIRIN |
|
Abstract: |
This study investigates the optimization of Distributed Generation (DG)
placement and sizing in power distribution systems using the Loss Sensitivity
Factor-Mutated Ant Lion Optimizer (LSF-MALO) algorithm. The goal is to minimize
real power losses, enhance voltage stability, and reduce economic losses by
strategically deploying DG units, such as photovoltaic (PV), diesel, and wind
turbines, in distribution networks. The LSF-MALO algorithm’s effectiveness is
evaluated using the IEEE 69-bus system, under varying load scenarios and DG
configurations. The results are compared with other optimization methods
(LSF-Analytical, EP, and MALO), showing that LSF-MALO consistently outperforms
them, particularly in multi-type DG configurations like PV-Diesel-Wind and
PV-PV-Wind. The findings demonstrate that LSF-MALO not only achieves significant
reductions in power losses and improvements in voltage profiles but also
provides substantial cost savings, highlighting its potential for improving the
efficiency and sustainability of modern power distribution systems. The study
underscores the importance of LSF-MALO as a robust tool for optimizing DG
integration in the context of decentralized, eco-friendly energy systems. |
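The loss-sensitivity screening that gives LSF-MALO its shortlist of candidate buses is commonly computed as LSF = ∂P_loss/∂P = 2·P_eff·R/V², ranking buses by how strongly injected DG power would reduce branch losses. The sketch below uses this standard formulation with invented bus data, not the paper's IEEE 69-bus case.

```python
def loss_sensitivity(branches):
    """branches: dict bus -> (P_eff, R, V) at the branch receiving end.
    LSF = dP_loss/dP = 2 * P * R / V^2; higher LSF marks buses where
    injecting DG power cuts real power losses the most."""
    lsf = {bus: 2 * p * r / v ** 2 for bus, (p, r, v) in branches.items()}
    return sorted(lsf, key=lsf.get, reverse=True)

# Hypothetical three-bus snippet: effective load in kW, resistance in ohms,
# voltage in per-unit (values are illustrative only)
branches = {
    61: (1244.0, 0.1732, 0.91),
    27: (140.0, 0.7114, 0.95),
    65: (59.0, 0.9744, 0.92),
}
print(loss_sensitivity(branches))  # candidate buses, most sensitive first
```

The metaheuristic (here MALO) then searches DG sizes only at these shortlisted buses, which is what keeps the multi-objective search tractable.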
|
Keywords: |
Distributed Generation (DG), Loss Sensitivity Factor-Mutated Ant Lion Optimizer
(LSF-MALO), Distribution Network, Multi-Objective, Voltage Stability,
Optimization Technique |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
AUTOMATED DETECTION AND SEVERITY CLASSIFICATION OF KNEE OSTEOARTHRITIS USING A
SCALABLE CONVOLUTIONAL NEURAL NETWORK |
|
Author: |
RYAVADI BUCHA RAO, KASUKURTHI VENKATA RAO |
|
Abstract: |
Osteoarthritis of the knee is a common degenerative joint condition that lowers
quality of life and causes disability, especially in people over 60. It is
brought on by the knee joint's degenerating cartilage, which causes bone-on-bone
contact, discomfort, stiffness, swelling, and restricted movement. Deep neural
networks, specifically “CNNs (Convolutional Neural Networks)”, have shown a lot
of promise in medical image processing for the identification and classification
of diseases. This research presents SCSNet, a deep learning model that
classifies knee osteoarthritis X-ray images into five groups: Minimum, Healthy,
Moderate, Doubtful, and Severe. Precision, recall, F1 score, and accuracy were
employed to compare the model's performance with three pre-trained transfer
learning models: “VGG-16”, “ResNet-50”, and “Xception”. Experimental results
show that SCSNet outperformed the transfer learning models on every metric
assessed, achieving 98% accuracy. |
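The per-class metrics used in the comparison (precision, recall, F1) follow directly from true/false positive counts. A minimal sketch, with a tiny invented label set rather than the paper's X-ray data:

```python
def per_class_metrics(y_true, y_pred, labels):
    """Precision, recall and F1 per class from raw predictions."""
    out = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        out[c] = (round(prec, 3), round(rec, 3), round(f1, 3))
    return out

grades = ["Healthy", "Doubtful", "Minimum", "Moderate", "Severe"]
y_true = ["Healthy", "Doubtful", "Minimum", "Moderate", "Severe", "Healthy"]
y_pred = ["Healthy", "Minimum", "Minimum", "Moderate", "Severe", "Healthy"]
print(per_class_metrics(y_true, y_pred, grades)["Healthy"])  # (1.0, 1.0, 1.0)
```

With five imbalanced severity grades, per-class F1 is more informative than overall accuracy, which is why the abstract reports all four metrics.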
|
Keywords: |
Knee Osteoarthritis, Deep Learning, Classification, Machine Learning, Scalable
Convolutional Neural Network, VGG16. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
BIG DATA AND ARTIFICIAL INTELLIGENCE AS TOOLS FOR OPTIMIZING THE BUSINESS
PROCESS MANAGEMENT IN ENTERPRISES |
|
Author: |
SVITLANA BELOUSOVA, SVITLANA PROKHORCHUK, NADIIA BAHAN, OLEKSANDRA TSYRA,
YURI CHERNENKO, DENYS TKACH |
|
Abstract: |
The aim of the study is to identify relationships between the level of
enterprise adoption of artificial intelligence (AI) and Big Data technologies,
the digital intensity of business, and the economic development of countries,
in particular GDP per capita. The focus is on how AI and Big Data transform
business processes, increasing the efficiency, adaptability, and
competitiveness of enterprises. The study used descriptive statistics, correlation and regression
analysis, data visualization, as well as the residual analysis to identify
countries that deviate from general trends. The source of data was EU enterprise
statistics on the digitalization level and the AI implementation, excluding the
financial sector. A positive relationship was found between the digital
intensity of business and the level of AI implementation: countries with higher
digital indicators are much more likely to use AI technologies. Implementation
also depends on the level of economic development: countries with higher GDP
per capita have a higher percentage of enterprises using AI. At the same time,
some countries demonstrate a higher or lower level of implementation than
expected, which points to other factors (policies, institutions, innovation
culture) that affect the digital transformation of business. Special attention
was paid to analyzing how the use of Big Data and AI creates new points of
management decision-making in business processes. These technologies allow enterprises not
only to automate typical operations, but also to carry out more accurate and
dynamic management at the stages of planning, forecasting, implementation and
monitoring of results. As a result, the management architecture itself changes:
new points of control and adjustment appear - for example, in real time it is
possible to adjust procurement, logistics or customer service strategies based
on predictive analytics. This contributes to flexibility of business models and
creates the basis for proactive efficiency management. A comprehensive approach
to analyzing countries' readiness for AI implementation is proposed that takes
into account both digital and economic characteristics, allowing a more
accurate assessment of the potential for digital transformation in the business
environment. The study also offers a deeper understanding of the new points of
intervention in business processes that AI and Big Data open up. The results
can be used to shape state digitalization policies, develop strategies to
support business innovation, and stimulate AI implementation, especially in
countries whose potential is currently not fully realized. The described
management points can be integrated into enterprise management systems to
increase productivity. |
|
Keywords: |
Digital Intensity, Artificial Intelligence, Big Data, Business Processes, EU
Enterprises, Economic Growth, Digital Transformation, Performance Management,
Innovation. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
OPTIMIZING VERTICAL DISTRIBUTION TECHNIQUES FOR HERITAGE MULTIMEDIA DATABASES:
CHALLENGES AND SOLUTIONS |
|
Author: |
SHYMAA H. MAHMOUD, AHMED E. ABDELRAOUF, NAGWA L. BADR, MOHAMED I. ALI |
|
Abstract: |
Distributed Multimedia Database Systems (DMDBSs) face critical challenges of
sustaining robust performance, influenced by factors such as data fragmentation,
allocation, replication, and site clustering. This research introduces an
approach for vertical fragmentation of heritage multimedia databases by
leveraging the silhouette analysis method alongside the K-means++ clustering
algorithm. The proposed approach offers superior centroid initialization and
objective cluster validation, resulting in higher cluster quality and improved
data compactness. Previous approaches based on K-means clustering with
refinement aim to improve data localization by adjusting attribute clusters;
however, such methods are limited by random centroid initialization and lack
objective validation of cluster quality. To evaluate the proposed method,
experiments were conducted on a real-world dataset from the Antiquities Museum
at the Bibliotheca Alexandrina. Results show that our approach outperforms both
the K-means with refinement method and the Differential Bond Energy Algorithm
(DBEA), achieving higher fragmentation quality, better data locality, and lower
execution time.
These findings highlight the practical relevance of the proposed approach for
enhancing the efficiency of multimedia data distribution in DMDBSs. |
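The two ingredients of the proposed approach — careful centroid seeding and silhouette-based cluster validation — can be illustrated compactly. This sketch uses a deterministic farthest-point variant of k-means++ seeding (the real algorithm samples points with probability proportional to squared distance) and toy 2-D points standing in for attribute-affinity vectors.

```python
import math

def dist(a, b):
    return math.dist(a, b)

def kmeanspp_init(points, k):
    """Simplified deterministic k-means++ seeding: start from the first point,
    then repeatedly take the point farthest from its nearest chosen centroid."""
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(dist(p, c) for c in centroids)))
    return centroids

def silhouette(points, labels):
    """Mean silhouette coefficient: (b - a) / max(a, b) per point, where a is the
    mean intra-cluster distance and b the mean distance to the nearest other cluster."""
    scores = []
    for i, p in enumerate(points):
        same = [dist(p, q) for j, q in enumerate(points) if j != i and labels[j] == labels[i]]
        a = sum(same) / len(same)
        b = min(
            sum(dist(p, q) for j, q in enumerate(points) if labels[j] == lab)
            / sum(1 for j in range(len(points)) if labels[j] == lab)
            for lab in set(labels) - {labels[i]}
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated attribute groups (toy stand-in for an affinity matrix)
pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
cents = kmeanspp_init(pts, 2)
labels = [min(range(2), key=lambda c: dist(p, cents[c])) for p in pts]
print(round(silhouette(pts, labels), 3))  # close to 1 for compact, distant clusters
```

A silhouette score near 1 objectively validates the fragmentation, which is exactly the check the abstract says random-seeded K-means with refinement lacked.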
|
Keywords: |
Distributed Multimedia Database Systems (DMDBSs), Vertical Fragmentation,
Feature Scaling, K-Means++ Clustering, Silhouette Analysis, Heritage Multimedia
Databases. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
ONTOLOGIES FOR ONCOLOGICAL RADIOLOGY: CHALLENGES AND OPPORTUNITIES |
|
Author: |
MARIO DI LONARDO , CARLO SARTIANI |
|
Abstract: |
Medical imaging examinations, especially Magnetic Resonance Imaging (MRI),
interpreted by radiologists in the form of narrative reports, are used to
produce and confirm diagnoses in clinical practice at different levels. Being
able to accurately and quickly identify the information scattered in the
radiologists’ narratives has the potential to reduce workloads, support
clinicians in their decision processes, triage patients to get urgent care or
identify and cluster patients for research purposes. This is especially critical
within the context of the Tumor Boards, multidisciplinary groups made up of
different specialists, who regularly meet to discuss oncological patients
potentially needing pre/post-surgery treatments and to make diagnostic and
therapeutic decisions for them. Nowadays, it is still difficult to access and
analyze radiology reports both effectively and efficiently at scale, due to
their unstructured nature, the concise and often cryptic medical jargon they
use, and the background knowledge usually required for interpreting them
correctly and making high-level correlations and conclusions. Privacy concerns
introduce further difficulties and impose the adoption of local tools rather
than cloud-based ones, hence preventing the use of popular LLMs. Ontologies
represent an important tool for easing the automatic processing of medical
records and radiology reports. Indeed, they can be used to add structure to otherwise
unstructured texts; furthermore, they play a key role in data exchange and
sharing, as they enable semantic interoperability. In this review paper we
will introduce the problems related to the use of ontologies in the medical
domain, with particular emphasis on radiology reports, and discuss the prominent
advantages that a systematic use of ontologies would generate. |
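A dictionary lookup is the simplest possible illustration of how an ontology adds structure to free text: synonyms map to one concept ID, which is what enables the semantic interoperability the abstract mentions. The concept IDs below are made up; a real annotator would map to RadLex or SNOMED CT codes and handle multi-word terms.

```python
# Hypothetical mini-ontology: surface forms -> (concept ID, preferred label).
ONTOLOGY = {
    "mri": ("IMG:0001", "magnetic resonance imaging"),
    "lesion": ("FND:0042", "lesion"),
    "hepatic": ("ANAT:0107", "liver"),
    "liver": ("ANAT:0107", "liver"),   # synonym resolves to the same concept
}

def annotate(report):
    """Tag each known term in a free-text report with its ontology concept,
    turning unstructured narrative into (term, concept_id, label) triples."""
    found = []
    for token in report.lower().replace(".", " ").replace(",", " ").split():
        if token in ONTOLOGY:
            cid, label = ONTOLOGY[token]
            found.append((token, cid, label))
    return found

report = "MRI shows a hepatic lesion, unchanged from prior study."
print(annotate(report))
```

Because "hepatic" and "liver" resolve to the same concept ID, queries and cross-institution exchange operate on concepts rather than on each radiologist's wording.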
|
Keywords: |
Ontologies, RDF, Mappings, Annotations, ML |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
Title: |
ADVANCING POST QUANTUM CRYPTOGRAPHY DEVELOPING SECURE ENCRYPTION METHODS
RESISTANT TO QUANTUM ATTACKS |
|
Author: |
REVATHI TALARI, LEELAVATHY.B, DR. ANANTHAN NAGARAJAN, DR.B. HEMANTHA KUMAR,
DR RAGHAVENDER K V, DR. N. NEELIMA, ARUNA VIPPARLA |
|
Abstract: |
With the current increase in the rate of quantum computing, traditional
cryptographic algorithms face a significant risk of compromise, as they are
based on mathematical problems that quantum algorithms can efficiently solve.
One solution to this challenge has come up in the form of post-quantum
cryptography (PQC). By having a hybrid encrypted solution that combines the two
different encryption algorithms, NTRU quantum-resistant and fast AES
symmetric-key, with a well-integrated combination presented in this paper, we
aim to usher in secure and safe data in the post-quantum era. This mixed regime
is the most advanced approach that combines classical and quantum encryption. We
compare the performance of the system with that of standalone AES, NTRU, and
RSA-based systems, examining aspects such as encryption, decryption, key
generation, and key size. Benchmarks indicate that the hybrid system has an
overhead, but its performance is passable; encryption times are 60ms and 14000ms
for 1 KB and 5000 KB of data, respectively, compared to 12ms for 1 KB and 50ms
for 5000 KB by AES and NTRU. The same can be said about decryption, with the
hybrid system being more efficient than NTRU but still slower than AES. NTRU
key generation was slower than that of AES, and this margin widened as the NTRU
key sizes increased. The hybrid system is relatively computationally expensive
compared with classical schemes, but it offers strong quantum resistance. An
AES-NTRU hybrid encryption system is thus a practical answer to the quantum
computing threat to data security. Future work will focus on optimizing the
NTRU component, including hardware acceleration opportunities and wider
deployment in real-life environments. |
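The hybrid flow being benchmarked — a slow quantum-resistant KEM protecting only a short session key, with a fast symmetric cipher handling the bulk data — can be sketched with standard-library stand-ins. To be clear, `sym_encrypt` and `kem_encapsulate` below are toy placeholders for AES and NTRU, shown only to make the key-wrapping structure concrete; they are not real implementations of either primitive.

```python
import hashlib
import secrets

def sym_encrypt(key, data):
    """Toy symmetric cipher (XOR with a SHAKE-256 keystream); AES stand-in."""
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

sym_decrypt = sym_encrypt  # an XOR keystream is its own inverse

def kem_encapsulate(public_key):
    """Stand-in for NTRU encapsulation: returns (shared_secret, ciphertext)."""
    secret = secrets.token_bytes(32)
    return secret, sym_encrypt(public_key, secret)   # placeholder key wrapping

def kem_decapsulate(private_key, ciphertext):
    return sym_decrypt(private_key, ciphertext)

# Hybrid flow: the costly KEM protects only the 32-byte session key,
# while the fast symmetric cipher carries the bulk payload.
pk = sk = b"demo-keypair"          # toy "keypair" for the sketch only
session_key, wrapped_key = kem_encapsulate(pk)
ct = sym_encrypt(session_key, b"bulk model weights and data")
pt = sym_decrypt(kem_decapsulate(sk, wrapped_key), ct)
print(pt)
```

This structure explains the benchmark numbers: per-message overhead is dominated by one fixed-size KEM operation, so the hybrid scheme stays close to symmetric-cipher throughput on large payloads.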
|
Keywords: |
Hybrid Encryption, Post-Quantum Cryptography, NTRU, AES, Quantum Resistance,
Cryptographic Efficiency |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th September 2025 -- Vol. 103. No. 17-- 2025 |
|
Full
Text |
|
|
|