Submit Paper / Call for Papers
The journal receives papers in a continuous flow and considers articles from a wide range of Information Technology disciplines, from basic research to the most innovative technologies. Please submit your papers electronically through our submission system at http://jatit.org/submit_paper.php in MS Word, PDF, or a compatible format so that they may be evaluated for publication in the upcoming issue. This journal uses a blinded review process; please remember to include all your personally identifiable information in the manuscript before submitting it for review, and we will edit the necessary information on our side. Submissions to JATIT should be full research / review papers (properly indicated below the main title).
Journal of Theoretical and Applied Information Technology
April 2025 | Vol. 103 No. 8 |
Title: |
ENHANCING SQL INJECTION (SQLI) MITIGATION BY REMOVING MALICIOUS SQL PARAMETER
VALUES USING LONG SHORT-TERM MEMORY (LSTM) NEURAL NETWORKS |
Author: |
NWABUDIKE AUGUSTINE, ABU BAKAR MD. SULTAN, MOHD HAFEEZ OSMAN, KHAIRONI YATIM
SHARIF |
Abstract: |
Web applications are increasingly targeted by cyber attacks, and SQL injection remains a significant vulnerability, causing an estimated USD 4 billion in global economic losses in 2022. Research has explored methods to mitigate the impact of these attacks, either by detecting them immediately or by stopping them in their tracks. However, conventional approaches such as rule-based or signature-based systems have limitations, as they cannot adapt to new or obscure attack patterns. This study explores the potential of Long Short-Term Memory (LSTM) neural network-based architectures for sanitizing SQL parameter values by removing malicious characters, and shows that the LSTM is the top performer, consistently achieving near-perfect accuracy, precision, recall, and F1-scores of 99.66%. In short, this work makes a strong and scalable contribution to web application security and demonstrates the feasibility of SQLi mitigation using LSTM networks. By addressing this critical gap in the literature, the study represents significant progress in the field of cyber security and in robustness against obfuscated attacks. |
Keywords: |
SQL Injection, Long Short-Term Memory, Web Application Security, Machine
Learning, Cyber Attack |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
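As an illustration of the LSTM-based approach named in the entry above, the following is a minimal, hypothetical sketch of a character-level LSTM that flags SQL parameter values as malicious or benign. The data, vocabulary size, and hyperparameters are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch only: character-level LSTM classifier for SQL parameter values.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

MAX_LEN = 100      # assumed maximum parameter length (characters)
VOCAB = 128        # ASCII vocabulary

def encode(value: str) -> np.ndarray:
    """Map a parameter string to a fixed-length sequence of character codes."""
    codes = [min(ord(c), VOCAB - 1) for c in value[:MAX_LEN]]
    return np.array(codes + [0] * (MAX_LEN - len(codes)))

# Toy examples (hypothetical): a benign value vs. a classic injection payload.
X = np.stack([encode("42"), encode("1' OR '1'='1' --")])
y = np.array([0, 1])

model = models.Sequential([
    layers.Embedding(input_dim=VOCAB, output_dim=32),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),  # probability that the value is malicious
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, verbose=0)
print(model.predict(X, verbose=0))
```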
|
Title: |
USING BIG DATA TO DEVELOP DIGITAL MARKETING STRATEGIES: A CASE STUDY |
Author: |
YULIIA SOKOLOVA, OLHA KATUNINA , NADIIA PYSARENKO , INNA KLIMOVA , OLEH
KOVALCHUK |
Abstract: |
The power of Big Data to deliver tailored, hyper-focused content, segment audiences, forecast trends, and fine-tune marketing and advertising spending is hard at work in today's competitive digital landscape, and, for many businesses, difficult to see. This study therefore explores the role of Big Data in the development of digital marketing strategies, with Amazon as a case study over the period 2018–2023. The goal is to study how data-driven strategies influence the most crucial marketing outcome metrics, including customer engagement, conversion rates, return on investment (ROI), and advertising efficiency. The study uses a mixed-method approach, combining qualitative content analysis of Amazon with quantitative analysis via regression models and Difference-in-Differences (DiD) estimation. Results suggest that personalisation, segmentation, and real-time advertising optimisation significantly improve performance: post-intervention campaigns increase conversion rates by 3.1% and reduce customer acquisition costs by $10. Further, regression analysis reinforces the positive effect of Big Data strategies, with advertising optimisation having the most significant impact on ROI. Based on these findings, the study recommends that marketing policy be built on continuous feedback loops and predictive analytics to facilitate adaptive, efficient, and targeted marketing efforts. However, it will also be critical to establish clear data governance frameworks and ethical data practices to maintain consumer trust and compliance with privacy regulations. |
Keywords: |
Content Personalisation, Market Trends Forecasting, Advertising Campaigns
Optimisation, Audience Segmentation. |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
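To illustrate the Difference-in-Differences estimation mentioned in the abstract above, here is a small sketch on synthetic campaign data; the variable names and numbers are assumptions, not figures from the study.

```python
# Illustrative DiD sketch: the coefficient on treated:post estimates the intervention effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = campaign used Big Data optimisation
    "post": rng.integers(0, 2, n),      # 1 = observation after the intervention
})
# Conversion rate with an assumed +3 percentage-point treatment effect post-intervention.
df["conversion"] = (5 + 2 * df["treated"] + 1 * df["post"]
                    + 3 * df["treated"] * df["post"] + rng.normal(0, 1, n))

model = smf.ols("conversion ~ treated * post", data=df).fit()
print(model.params["treated:post"])     # DiD estimate
```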
|
Title: |
LEVEL OF GPT CHAT USE AMONG STUDENTS AND FACTORS AFFECTING ITS USE |
Author: |
BERTON ATALLAH BRAHMACARI PARIKESIT, Drs. TUGA MAURITSIUS |
Abstract: |
OpenAI's ChatGPT is one of the preferred AI language models for natural-language interaction with consumers. Based on a quantitative research methodology with a questionnaire survey, this study examines the factors that influence users' acceptance and use of ChatGPT using the Technology Acceptance Model (TAM). Respondents in this study were 100 ChatGPT users. Of the six factors examined, five are accepted while one, perceived interactivity, is rejected. The findings also indicate that in the future people will use ChatGPT to help them with learning. |
Keywords: |
Chat GPT, TAM, PLS-SEM, Education |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
Title: |
ZERO TRUST SECURITY FOR CARD-NOT-PRESENT TRANSACTIONS: EXTENDING EMV-LIKE
CONTINUOUS AUTHENTICATION AND ADAPTIVE RISK VALIDATION ACROSS PAYMENT NETWORKS |
Author: |
MAROUANE AIT SAID , ABDELMAJID HAJAMI , AYOUB KRARI |
Abstract: |
Card-Not-Present (CNP) fraud remains a critical challenge [1][2][3] in digital
payments, exploiting gaps between merchants, acquirers, and issuers within
trusted payment networks. While EMV technology ensures dynamic authentication
for Card-Present (CP) transactions [4][5], CNP transactions lack equivalent
protection [6], often bypassing real-time risk assessment. This paper introduces a Zero Trust security model for CNP transactions, extending EMV-like continuous authentication and adaptive risk validation across payment stakeholders without modifying the ISO8583 messaging standard. By leveraging AI-driven risk scoring, behavioral biometrics, device fingerprinting, and
multi-factor authentication (MFA), the model ensures continuous verification
from initiation to authorization. Risk scores dynamically evolve across the
payment chain, enabling real-time decision-making. Experimental results
demonstrate a 92.1% fraud detection accuracy, a 36% reduction in false
positives, and real-time processing within 310 milliseconds per transaction.
This approach bridges the security gap in CNP transactions, aligning with
PCI-DSS, PSD2, and EMVCo standards while preserving user experience. By
extending Zero Trust principles across the payment network, this work
establishes a scalable and resilient framework for securing digital
transactions. |
Keywords: |
Zero Trust Security, Card-Not-Present Fraud, EMV-Like Authentication, ISO8583,
Continuous Authentication, Adaptive Risk Validation, AI-Driven Fraud Detection. |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
Title: |
A STUDY OF ONLINE VIDEO CONSUMPTION BEHAVIOR THROUGH SOCIAL MEDIA AMONG
GENERATION Z IN INDONESIA |
Author: |
BRYAN HENRY , VIANY UTAMI TJHIN |
Abstract: |
The advancement of technology has led to the increasing prominence of social media. Exposure to social media, particularly among Gen Z, has raised
concerns about the potential for online video addiction on these platforms. This
study aims to analyze the factors influencing social media use and its potential
impact on online video addiction among Gen Z. The research employs a
quantitative approach using the Structural Equation Modeling (SEM) method. Data
collection was conducted through a questionnaire completed by 400 respondents.
The findings indicate that factors such as internet literacy and loneliness have
a significant positive effect, whereas FOMO does not have a significant positive
impact on social media use and the potential for online video addiction.
Additionally, sensation seeking does not have a significant positive moderating
effect on social media use and, therefore, does not influence the potential for
online video addiction. |
Keywords: |
Internet Literacy, Loneliness, FOMO, Social Media Use, Online Video Addiction,
Sensation Seeking |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
Title: |
AI-DRIVEN FRAUD DETECTION AND SECURITY SOLUTIONS: ENHANCING ACCURACY IN
FINANCIAL SYSTEMS |
Author: |
JANJHYAM VENKATA NAGA RAMESH , DR SUDHANSU SEKHAR NANDA , VAMSI KRISHNA
CHIDIPOTHU , VEMULA JASMINE SOWMYA , AMIT VERMA , TAOUFIK SAIDANI |
Abstract: |
Financial institutions worldwide are facing a major challenge in identifying
unauthorized transactions. Financial organizations need sophisticated fraud
detection systems to safeguard their financial stability and maintain customer
confidence; nevertheless, the design of such systems may be hindered by many
complexities. The infrequency of illegal transactions and the disproportion in
several transaction datasets, namely the few occurrences of fraudulent
transactions relative to legitimate ones, exemplify these characteristics. The
dataset disparity may undermine the accuracy and effectiveness of a fraudulent
activity detection program. Disseminating customer information to provide a more
effective centralized approach is impractical, since each bank is required to
comply with data protection standards. To ensure the user experience remains
unaltered, the fraud detection solution must be both accessible and easily
observable. As a result, this study presents an innovative approach to tackling these issues by combining Federated Learning (FL) with Explainable AI (XAI). With FL, financial institutions work together to train a model that can identify fraudulent activities without directly sharing client information, protecting the privacy and security of that information. The addition of XAI ensures that human specialists can understand and analyse the model's results, giving the system greater reliability and transparency. The FL-based
fraudulent detection system regularly shows high-performance measures, as shown
by experiments employing real transaction datasets. This study demonstrates that
federated learning might serve as an effective and confidential instrument in
combating fraudulent activity. |
Keywords: |
Federated Learning, Explainable AI, Fraud Detection Systems, Data Privacy,
Client Information |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
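For readers unfamiliar with federated learning as used in the entry above, the following is a minimal Federated Averaging (FedAvg) sketch in NumPy: each "bank" trains a logistic model locally and only model weights are averaged, never raw transactions. This is purely illustrative; the paper's actual FL setup and its XAI component are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, steps=50):
    """A few steps of local gradient descent on one institution's private data."""
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Three hypothetical banks, each with its own (private) fraud data.
banks = [(rng.normal(size=(200, 5)), rng.integers(0, 2, 200)) for _ in range(3)]
w_global = np.zeros(5)

for round_ in range(10):                       # communication rounds
    local_weights = [local_update(w_global.copy(), X, y) for X, y in banks]
    w_global = np.mean(local_weights, axis=0)  # server averages weights only

print(w_global)
```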
|
Title: |
THE ROLE OF DIGITAL TWIN IN OPTIMIZING SMART LAUNDRY SERVICES: ANALYZING
CUSTOMER ACCEPTANCE AND LOYALTY THROUGH TAM AND IS SUCCESS MODEL |
Author: |
SHIFA REGITA , VIANY UTAMI TJHIN |
Abstract: |
Traditional laundry services often struggle with inefficiencies, high
operational costs, and a lack of real-time monitoring. To address these
challenges, this study examines the role of Digital Twin (DT) technology in
optimizing smart laundry services. DT enables real-time data synchronization,
predictive maintenance, and enhanced decision-making by creating a virtual
replica of physical assets. This research adopts the Technology Acceptance Model
(TAM) and the IS Success Model to analyze customer acceptance and loyalty toward
DT-based laundry services. The findings confirm that Information Quality, System
Quality, and Service Quality significantly influence Perceived Usefulness and
Perceived Ease of Use, which in turn impact Customer Satisfaction and Customer
Loyalty. The study also highlights that accurate information, a stable system,
and high-quality service contribute to a positive user experience. By
integrating DT, smart laundry services can improve operational efficiency,
reduce downtime, and enhance customer trust. However, challenges such as limited
market awareness and the adaptation of DT in service-based businesses remain.
The results reinforce that DT is a promising innovation for modernizing laundry
operations, making them more reliable, efficient, and customer-centric. |
Keywords: |
Digital Twin, Smart Laundry, Customer Satisfaction, Customer Loyalty, TAM, IS Success Model |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
Title: |
GENERATING OPTIMAL TEST CASES USING ELITIST GENETIC ALGORITHM |
Author: |
ASHOK KUMAR BANDLA, C S PAVAN KUMAR, DR K KOTESWARA RAO, DR O. RAMA DEVI, DR
KALAIVANI K, KOLACHANA SWETHA, DR. CHANDANAPALLI SURESH BABU |
Abstract: |
In order to reduce the number of test cases that do not significantly improve
the mean of test coverage or where the test cases are unable to isolate
mistakes, this research study examines the use and efficacy of the Elitist
genetic algorithm. In this work, a genetic algorithm is used to help minimize or
optimize the test cases. The algorithm creates the initial population at random,
determines the fitness value using coverage metrics, and then uses genetic
operations such as selection, crossover, and mutation to select the offspring
in successive generations. Specific genetic modeling processes may differ from
standard genetic algorithms depending on the task. This generation process is
performed until the fitness values remain unchanged for two successive
generations. Convergence or a reduced test case is reached when the data
generation remains unchanged for two iterations. The findings of the study reveal that genetic algorithms can greatly reduce the number of test cases. |
Keywords: |
Elitist GA, Test Case, Optimal, NP Complete, Minimize |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
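The following is a hedged sketch of an elitist genetic algorithm for test-suite minimisation in the spirit of the entry above: fitness rewards requirement coverage and penalises suite size, elites are preserved, and the loop stops once the best fitness stalls across successive generations. The coverage matrix, operators, and parameters are illustrative assumptions, not the paper's encoding.

```python
import numpy as np

rng = np.random.default_rng(2)
N_TESTS, N_REQS, POP, ELITE = 30, 15, 40, 2
coverage = rng.integers(0, 2, (N_TESTS, N_REQS))      # test i covers requirement j

def fitness(ind):
    covered = (coverage[ind.astype(bool)].sum(axis=0) > 0).sum()
    return covered - 0.1 * ind.sum()                   # reward coverage, penalise suite size

pop = rng.integers(0, 2, (POP, N_TESTS))
prev_best = None
while True:
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]
    elite = pop[order[:ELITE]].copy()                  # elitism: keep the best individuals
    parents = pop[order[:POP // 2]]
    children = []
    while len(children) < POP - ELITE:
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, N_TESTS)                 # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(N_TESTS) < 0.02              # mutation
        child[flip] ^= 1
        children.append(child)
    pop = np.vstack([elite, children])
    best = scores.max()
    if prev_best is not None and best == prev_best:    # stop when fitness stalls
        break
    prev_best = best

print("best fitness:", prev_best)
```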
|
Title: |
AN OPTIMIZED DEEP LEARNING MODEL FOR EARLY DETECTION OF RHEUMATOID ARTHRITIS
USING KNEE X-RAYS |
Author: |
SALONI FATHIMA , G. SHANKAR LINGAM |
Abstract: |
Rheumatoid arthritis (RA) is a chronic autoimmune condition that requires early
diagnosis to prevent irreversible joint damage. While deep learning models have
shown promise in automated RA detection, existing approaches suffer from data
imbalance, suboptimal feature extraction, and poor generalizability, limiting
their clinical applicability. This study proposes an optimized Convolutional
Neural Network (CNN) model, incorporating image augmentation and class balancing
techniques to improve RA detection accuracy from knee X-ray images. Unlike
previous studies, which often overlook the impact of data augmentation on model
performance, our work demonstrates its effectiveness in addressing data
imbalance and enhancing model robustness. We trained and validated our model
using the Kaggle knee X-ray dataset, applying image augmentation to expand
training samples and oversampling to balance class distributions. The CNN was
optimized through rigorous hyperparameter tuning. Our optimized CNN model
achieved a high accuracy of 94%, outperforming baseline deep learning models.
Data augmentation and oversampling significantly improved model performance,
proving their effectiveness in medical imaging tasks. Our findings establish a
novel deep learning framework for RA detection, demonstrating the importance of
data augmentation and optimization in improving diagnostic accuracy. This work
contributes to the growing field of AI in medical imaging by offering a scalable
and interpretable solution for automated RA detection. |
Keywords: |
Rheumatoid Arthritis, knee X-ray images, statistical augmentation, deep
learning, CNN. |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
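As a hedged illustration of the CNN-with-augmentation setup described in the entry above, here is a small Keras sketch for binary RA vs. normal classification of knee X-rays. The image size, architecture, and the "data/train" directory layout are assumptions, not the paper's tuned configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG = 224

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
])

model = models.Sequential([
    layers.Input((IMG, IMG, 1)),
    augment,                                   # augmentation is active only at training time
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),     # RA vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical usage with a directory of labelled X-rays:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "data/train", image_size=(IMG, IMG), color_mode="grayscale", batch_size=32)
# model.fit(train_ds, epochs=20)
```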
|
Title: |
OPTIMIZING BUOYANT WIRELESS SENSOR NETWORKS: EVALUATING ALBATROSS ADAPTIVE
ACOUSTIC ROUTING PROTOCOL FOR ENHANCED PERFORMANCE AND RELIABILITY |
Author: |
DIVYA JOSE J , Dr. D. VIMAL KUMAR |
Abstract: |
The proposed Albatross Adaptive Acoustic Routing Optimization (AARO) protocol
represents a breakthrough in IT-driven solutions for underwater wireless sensor
networks. By leveraging adaptive routing mechanisms and bio-inspired
optimization, AARO addresses critical challenges in underwater communication,
including high energy consumption, network instability, and packet loss. This
research contributes to IT by integrating intelligent routing decisions based on
real-time environmental data, enhancing network longevity, and improving data
transmission reliability. The findings demonstrate that AARO significantly
outperforms existing protocols, offering a scalable and energy-efficient
solution for buoyant WSNs. The research aligns with advancements in IT, enabling
efficient data acquisition in marine exploration, disaster management, and
environmental monitoring, reinforcing the role of IT in solving real-world
challenges. |
Keywords: |
Buoyant WSN, Adaptive Acoustic Routing Protocol, Underwater Communication, Routing, Albatross Optimization. |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
Title: |
A NOVEL METHOD FOR FLEXIBLE CAPACITY MULTI-IMAGE HIDING |
Author: |
HESHAM F.ABDELRAZIK , AHMED S.ELSAYED , SALLY S.ISMAIL , ABEER M.MAHMOUD |
Abstract: |
The importance of multiple-image hiding has increased in the current digital era, owing to the growing demand for robust data protection and secure communication. The success of a multiple-image hiding algorithm depends on
concealing multiple secret images in one cover image and perfectly recovering
all secret images at the receiving end. Unfortunately, the two primary
challenges with current multi-image hiding methods are preserving high capacity
while reducing visual distortions and guaranteeing precise recovery of every
hidden image. The main challenge is to keep the hidden images visually imperceptible while concealing as much information as possible in a single cover image. In
this paper, a novel framework for concealing multiple images using an invertible
hiding neural network and image super-resolution is proposed. Extensive
experiments on DIV2K, ImageNet, and COCO demonstrated that the proposed
framework obtained significant results in terms of both imperceptibility and
recovery accuracy of the secret images, achieving average PSNR values of 27.16 dB, 26.65 dB, and 32.13 dB and SSIM values of 0.810, 0.783, and 0.885, respectively, for hiding four secret images with an upscaling factor of 2. The proposed framework
successfully concealed and revealed up to 64 secret images. These results
confirm the effectiveness of the proposed method in achieving both high
invisibility and enabling multiple image concealment with a large capacity. |
Keywords: |
Image hiding, Image rescaling, Invertible hidden neural network, Image
super-resolution, Deep learning. |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
Title: |
DISEASE RISK PREDICTION USING ELECTRONIC HEALTH RECORD DATA BASED ON FLYING
SQUIRREL SEARCH OPTIMIZATION WITH BIDIRECTIONAL – LONG SHORT TERM MEMORY |
Author: |
PRASANTHI YAVANAMANDHA , D. S. RAO |
Abstract: |
Nowadays, the attention towards effective disease risk prediction has increased,
due to its importance in monitoring the future health status of patients. This
prediction helps to provide the right treatment for patients to prevent severe
stages of diseases. However, the existing risk prediction models based on
Machine Learning (ML) have limitations in learning temporal information from the
Electronic Health Record (EHR) data. To overcome this, a Flying Squirrel Search
Optimization (FSSO) algorithm for feature selection and Bi-directional Long
Short Term Memory (Bi-LSTM) is proposed to enhance the accurate disease risk
prediction using EHR data. The proposed FSSO based feature selection method
efficiently reduces the dimensionality of EHR data and helps to identify the
most predictive features for disease risk accurately. By utilizing the
bidirectional layers, Bi-LSTM model learns the dependencies in both past and
future directions, which makes the model suitable for capturing comprehensive
temporal patterns that influence disease risk. Initially, the EHR data is
acquired and preprocessed by solving the missing values in the dataset to
enhance the risk prediction process. The proposed method achieved an accuracy of 0.938 on the MIMIC-IV dataset when compared with existing methods such as XGBoost and RDF. |
Keywords: |
Bi-Directional Long Short Term Memory, Disease Risk Prediction, Electronic
Health Record, Flying Squirrel Search Optimization, Machine Learning. |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
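A minimal sketch of the bidirectional LSTM idea from the entry above, applied to sequences of EHR feature vectors for binary risk prediction, follows. The sequence length, feature count, and data are placeholders, and the FSSO feature-selection step from the paper is not reproduced here.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

TIMESTEPS, FEATURES = 48, 20        # e.g. 48 hourly observations of 20 selected features

model = models.Sequential([
    layers.Input((TIMESTEPS, FEATURES)),
    layers.Bidirectional(layers.LSTM(64)),     # learns context in both time directions
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),     # probability of adverse outcome
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

# Toy data standing in for preprocessed EHR sequences.
X = np.random.rand(64, TIMESTEPS, FEATURES).astype("float32")
y = np.random.randint(0, 2, 64)
model.fit(X, y, epochs=1, verbose=0)
```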
|
Title: |
HYBRID TYPE ERROR CORRECTION CODE TO INCREASE RELIABILITY IN SRAM MEMORIES |
Author: |
P V GOPIKUMAR , R MANIKANDAN , C RAVI SHANKAR REDDY |
Abstract: |
This paper presents the implementation of a mixed-mode Error Correction Code (ECC) architecture capable of detecting and correcting both random and burst errors. The proposed architecture combines two sub-architectures, one targeting random errors and the other targeting burst errors. Random faults are diagnosed using modified decimal matrix coding, whereas burst errors are handled by the flexible unequal error control method. The proposed architecture gains an adequate advantage with respect to area overhead and power consumption. The architecture is implemented in Cadence: the front-end design parameters are estimated using GENUS and, on similar lines, the back-end design is carried out using INNOVUS, with the generated performance metrics including the area report, power report, timing report, and chip layout. The area, power, and delay overheads of the proposed architecture are found to be 50088.491, 20.7 mW, and 2.11 ps, which are very small compared with the other ECCs taken for experimental comparison; these low values of area, power, and delay are mainly due to the use of an encoder reuse technique. Further, the experimental results confirm that the proposed architecture achieves 100% error correction for bursts of up to six bits and offers correction rates of 80.76% and 68.43% for bursts of length 7 and 8, respectively. The error correction capability for random errors is found to be 100% up to quintuple-bit errors, after which a slight decrease is observed for sextuple-, septuple-, and octuple-bit errors, with correction rates of 46%, 4%, and 2%, respectively. The increased reliability of this hybrid error correction code is achieved at the cost of a slight increase in overhead bits. |
Keywords: |
Random errors, Burst errors, Modified Decimal Matrix code (MDMC), Flexible
Unequal Error Control (FUEC), and Error Correction Code (ECC). |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
Title: |
AN EFFECTIVE FILTERING EXTENSION TECHNIQUE OF SMOTE FOR CONTROLLED SYNTHETIC
DATA GENERATION |
Author: |
SOMIYA ABOKADR , AZREEN AZMAN , HAZLINA HAMDAN , NURUL AMELINA NASHARUDDIN |
Abstract: |
Imbalanced datasets pose a significant classification challenge within the realm of machine learning and have gained growing prominence due to the need to handle real-world data that is usually imbalanced and skewed. Numerous resampling techniques have been proposed in the literature to improve performance by addressing the imbalance problem. The situation becomes more complex when one class contains a significantly larger number of examples than the other classes. Under such circumstances, machine learning-based algorithms can accurately identify examples from the majority class but often struggle, or are likely to fail, to recognize instances from the minority class. However, these minority class examples often contain crucial and valuable information. In addition, generating new instances to balance the minority classes through rules curated by a domain expert, as in standard SMOTE, may not be suitable for some instances, leading to misclassification by the model. Therefore, this paper proposes a novel technique, namely a filtering extension of the SMOTE algorithm based on an optimised SVM kernel trick, to control the generation of balanced synthetic data in overlapped samples. The main objective is to predict minority class instances accurately and robustly, addressing the imbalance and overlapping issues. The proposed technique is validated by a rigorous testing framework using 10-fold cross-validation to ensure a comprehensive evaluation of the support vector machine (SVM) classifier. Metrics such as AUC and G-mean were used to assess the accuracy, robustness, and effectiveness of the proposed technique compared with other traditional and machine learning-based methods. We experimented with highly imbalanced datasets from the KEEL repository. The proposed approach outperformed standard SMOTE and RC-SMOTE, proving the effectiveness of filtering imbalanced data in improving the generalizability and performance of machine learning classifiers. |
Keywords: |
SMOTE, Imbalance, Filtering, G-mean, Machine Learning |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
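For context on the baseline the entry above extends, here is a standard SMOTE-plus-SVM pipeline evaluated with 10-fold cross-validation; the proposed filtering extension itself is not implemented here, and the data is synthetic.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, weights=[0.95, 0.05], random_state=0)

pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),          # oversample the minority class (training folds only)
    ("svm", SVC(kernel="rbf", gamma="scale")), # classifier used for validation
])
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
print("mean ROC-AUC:", auc.mean())
```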
|
Title: |
OPTIMIZED GRID LINKED PV SYSTEM WITH ZETA CONVERTER FOR ENHANCED EFFICIENCY |
Author: |
SRINIVASA RAO JALLURI , GORU RADHIKA , SHASHIDHAR KASTHALA , G RAMAKRISHNA, NITHYA CHANDRAN |
Abstract: |
Efficient converters are essential for incorporating solar PV systems into grid infrastructure to control voltage and maximize energy transmission. Using a Zeta converter and the Incremental Conductance (INC) MPPT algorithm, this study explores a grid-linked photovoltaic scheme. The Zeta converter plays a crucial role in stabilizing the DC link voltage despite fluctuating solar irradiance, maintaining a consistent output between 290 V and 600 V. Through dynamic tracking and maintenance of the PV array's MPP, the INC MPPT approach improves system efficiency. In addition to optimizing energy extraction, this integrated strategy guarantees steady power transmission to the grid, reducing fluctuations and enhancing overall performance. The proposed scheme demonstrates significant improvements in voltage regulation, energy efficiency, and reliability, making it a viable solution for modern renewable energy applications. However, designing an optimal control strategy for the Zeta converter is essential to balance efficiency, transient response, and voltage stability. Its ability to regulate voltage effectively enhances the performance of photovoltaic systems, preventing fluctuations from affecting downstream components. Simulation outcomes demonstrate the efficacy of the proposed system under dynamic operating conditions, highlighting its effective power transfer. This study provides a comprehensive framework for implementing Zeta converters in grid-linked PV schemes, bridging the gap between renewable energy generation and grid stability. |
Keywords: |
Grid-Linked PV Scheme, Zeta Converter, DC Link Voltage, Voltage Regulation,
Renewable Energy |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
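To make the Incremental Conductance decision rule mentioned above concrete: at the maximum power point dP/dV = 0, i.e. dI/dV = -I/V. The sketch below applies that rule to update a voltage reference; the step size and the surrounding converter control loop are assumptions, and the Zeta converter model itself is not included.

```python
def inc_mppt_step(v, i, v_prev, i_prev, v_ref, step=0.5):
    """Return an updated voltage reference for the next control cycle."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:
            v_ref += step          # irradiance rose: move the operating point right
        elif di < 0:
            v_ref -= step
    else:
        g = di / dv                # incremental conductance dI/dV
        if g > -i / v:
            v_ref += step          # left of the MPP: increase voltage
        elif g < -i / v:
            v_ref -= step          # right of the MPP: decrease voltage
    return v_ref                   # unchanged when dP/dV is (numerically) zero

# Example: one control step with hypothetical PV measurements.
print(inc_mppt_step(v=300.0, i=8.1, v_prev=298.0, i_prev=8.2, v_ref=300.0))
```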
|
Title: |
NEXT GEN BUSINESS INTELLIGENCE: LEVERAGING PREDICTIVE ANALYTICS, AI & REAL-TIME DECISION-MAKING |
Author: |
AJAI GOPAL BHARTARIYA , S. K. SINGH , AJAY KUMAR BHARTI |
Abstract: |
In today’s dynamic business landscape, the ability to harness data effectively
has become a cornerstone of competitive advantage. Business Intelligence (BI) is
evolving rapidly, integrating cutting-edge technologies like predictive
analytics, artificial intelligence (AI), and real-time data processing to
deliver actionable insights. Predictive modeling plays a pivotal role in modern
data-driven decision-making, offering valuable insights into future trends and
behaviors across various data sets. This paper investigates the performance of
prevalent machine learning models for predictive analytics, with a particular
focus on XGBoost. We demonstrate how hyperparameter tuning can significantly
enhance the accuracy prediction of XGBoost comparing its performance against
other popular models such as decision trees, random forests, and logistic
regression. Through a systematic evaluation, we highlighted the effectiveness of
XGBoost in capturing complex patterns within the data, resulting in superior
predictive performance. Our findings provide evidence that XGBoost, when
appropriately tuned, outperforms traditional models, offering a more robust
approach for predictive analytics. This research contributes to the ongoing
discourse on the application of machine learning techniques, providing practical
insights for researchers and practitioners aiming to improve predictive modeling
accuracy. |
Keywords: |
Artificial Intelligence (AI), XGBoost, Random Forest, GridSearch, Business
Intelligence (BI), Machine Learning (ML), Data, NLP. |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
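As a hedged illustration of the hyperparameter tuning described in the entry above, here is a GridSearchCV sweep over an XGBoost classifier; the parameter grid and data are placeholders, not the values tuned in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [3, 5, 7],
    "learning_rate": [0.05, 0.1],
}
search = GridSearchCV(
    XGBClassifier(eval_metric="logloss", random_state=0),
    param_grid, cv=5, scoring="accuracy", n_jobs=-1,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```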
|
Title: |
ENHANCED PREDICTIVE MODELING FOR ALZHEIMER’S DISEASE: INTEGRATING CLUSTER-BASED
BOOSTING AND GRADIENT TECHNIQUES WITH OPTIMIZED FEATURE SELECTION |
Author: |
S PHANI PRAVEEN , SREEDHAR BHUKYA , SHARMILA VALLEM , SATEESH GORIKAPUDI , KIRAN
KUMAR REDDY PENUBAKA , VAHIDUDDIN SHARIFF |
Abstract: |
The early signs of Alzheimer’s disease (AD) prove difficult to detect since this
progressive illness has an ongoing character. The correct prediction of how the
disease progresses represents a critical need for providing prompt treatment
that enables effective management of the disease. The research develops
Cluster-Integrated Boosting and Gradient (CIBG) as a stacked ensemble which
combines CatBoost with Gradient Boosting Machine (GBM) classifiers to boost AD
diagnostic precision. Kaggle provided the dataset containing diverse
health-related characteristics together with lifestyle characteristics and
cognitive data. The data preprocessing steps included handling missing values
and normalization procedures coupled with SMOTE implementation to solve class
imbalance problems. The feature selection process involved using Recursive
Feature Elimination (RFE) which determined the most important predictive
variables. The CIBG model uses CatBoost’s ordered boosting method to manage
categorical features together with GBM’s gradient boosting structure that
performs classification. Within CIBG, logistic regression serves as the base classifier. Model performance was evaluated using accuracy, precision, recall, and F1-score, reaching an optimal rate of 96.8%.
The CIBG approach produces results that show better performance than standard
classifiers while maintaining stronger robustness and improved interpretability
capabilities. The research shows how hybrid machine learning methods help
enhance early diagnosis of AD and develops medical systems that are dependable
for clinical use. |
Keywords: |
Recursive Feature Elimination, CIBG, Ensemble Learning Models, SMOTE, Feature
Selection, CatBoost, GBM. |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
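One plausible reading of the CIBG ensemble described above, sketched with scikit-learn: CatBoost and a Gradient Boosting Machine combined through logistic regression in a stacking arrangement. The exact stacking layout, preprocessing (SMOTE, RFE), and hyperparameters in the paper may differ; the data here is synthetic.

```python
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1500, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("catboost", CatBoostClassifier(iterations=200, verbose=0, random_state=0)),
        ("gbm", GradientBoostingClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_tr, y_tr)
print("test accuracy:", stack.score(X_te, y_te))
```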
|
Title: |
DATA SEGMENTATION USING MIXTURE REGRESSION MODELS WITH GENERALIZED GAUSSIAN
DISTRIBUTION AND K-MEANS |
Author: |
S A V S SAMBHA MURTHY S , K. SRINIVASA RAO , KUNJAM NAGESWAR RAO |
Abstract: |
Data segmentation using mixture regression models has gained a lot of momentum due to its ready applicability in market analytics, business analytics, financial analytics, supply chain analytics, human resource analytics, etc. In regression analysis it is customary to assume that the error term follows a Gaussian distribution. The Gaussian distribution has several drawbacks, such as being mesokurtic, and the resulting model may not serve well for all types of data. Hence, in this paper we develop a data segmentation method using a mixture of regression models with Generalized Gaussian Distributed (GGD) errors. The GGD includes leptokurtic, platykurtic, and Gaussian distributions as particular cases. The
model parameters are estimated using Expectation Maximization (EM) algorithm.
The initialization of the parameters is done by using k-means algorithm. The
data segmentation algorithm is derived using component maximum likelihood under
Bayesian framework. The utility of the proposed algorithm is demonstrated with
market segmentation. The performance of the algorithm is evaluated by computing
segmentation performance metrics such as accuracy, misclassification rate,
precision. It is observed that this method performs much better than the earlier
data segmentation methods having Gaussian distributed errors for the data sets
having leptokurtic and platykurtic response variables. |
Keywords: |
Segmentation Methods, Regression Analysis, Generalized Gaussian distribution,
Market Analytics, Expectation and Maximization Algorithm. |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
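For reference, one common parameterisation of the generalized Gaussian density mentioned in the entry above is given below; beta = 2 recovers the Gaussian case, beta < 2 gives leptokurtic errors, and beta > 2 gives platykurtic errors. The symbols are the standard ones and may differ from the paper's notation.

```latex
% Generalized Gaussian density with location \mu, scale \alpha and shape \beta.
f(x \mid \mu, \alpha, \beta) =
  \frac{\beta}{2\alpha\,\Gamma(1/\beta)}
  \exp\!\left(-\left(\frac{|x-\mu|}{\alpha}\right)^{\beta}\right)
```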
|
Title: |
ENHANCING CRIME DATA ANALYSIS THROUGH WORD-VECTOR CONVERSION: A RESNIK-GLOVE
APPROACH |
Author: |
SAJNA MOL H S , GLADSTON RAJ S |
Abstract: |
Crime data analysis now relies on Natural Language Processing (NLP) and machine learning algorithms that can extract insights from vast amounts of unstructured text data. There have been several efforts to apply classical word embedding models such as GloVe, BERT, Skip-Gram, and Bag of Words, but they do not capture the explicit meaning of crime texts. This paper presents a novel hybrid method in which Resnik's similarity measure is combined with the GloVe algorithm to improve the semantic representation of textual crime data. Our method employs the Resnik-GloVe model to generate word vectors for crime descriptions and addresses that capture both global word co-occurrence and semantic similarity. An Entropy-Swish-based convolutional dense neural network (ES-CDNN) is then trained on these vectors to enhance classification accuracy. Experiments performed with the 2016 San Francisco crime dataset validate that the Resnik-GloVe method generates an outstanding result, obtaining 98.33% accuracy, whereas GloVe achieved 96.25%, BERT 94.26%, Skip-Gram 92.15%, and Bag of Words reached as low as 90.66%. The suggested methodology provides improved classification of crime data and helps law enforcement officers and policymakers analyze crime patterns more effectively. The research contributes to the area of crime analytics by
addressing important shortcomings in existing word embedding methods and
demonstrating the real benefits of integrating semantic similarity measures with
conventional NLP models. |
Keywords: |
Crime Data Analysis, Word-To-Vector Conversion, Resnik Glove, Textual
Information Representation, Word Embedding Algorithms. |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
Title: |
ENHANCED SKIN LESION CLASSIFICATION USING DEEP LEARNING MODEL IN INTERNET OF
MEDICAL THINGS |
Author: |
SHANKARA CHIKKALINGAIAH, MAHADEVI KONANAHALLI CHUNCHAIAH , ASHWINI KERAGODU
SHIVALINGASWAMY |
Abstract: |
The recent development of the Internet of Medical Things (IoMT) has greatly
benefited medical professionals, patients and physicians in accessing medical
information. This information helps to detect and recognize the diseases in the
patients through medical images. However, the automatic detection and
classification of diseases in medical images, such as skin lesions, brain tumors, and breast cancer, are still challenging in IoMT due to image quality and
irrelevant features. To solve this issue, a Logistic Chaotic Map based Red Panda
Optimization algorithm (LCM-RPO) is proposed for the feature selection process
to select the significant features. Additionally, a Long Short-Term Memory (LSTM) model is utilised to classify the medical images effectively by learning
the sequential information. Initially, the medical images are acquired and then
raw images are preprocessed to enhance the data quality for further processing.
The DenseNet-169 based feature extraction model extracts the most important
features from the diseased portion. The experimental results of the proposed
LCM-RPO-LSTM model attained an accuracy of 95.40% and 0.973 for Ph2 and
ISIC-2016 datasets, which is higher than existing classification approaches such
as PLDG and SVM-IARO. |
Keywords: |
DenseNet-169, Internet of Medical Things, Logistic Chaotic Mapping Based Red Panda Optimization Algorithm, Long Short-Term Memory, Medical Image Classification |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
Title: |
GAN-OST: GENERATIVE ADVERSARIAL NETWORKS FOR PRECISION SEGMENTATION AND
SYNTHETIC AUGMENTATION OF OSTEOSARCOMA TUMOURS IN MEDICAL IMAGING |
Author: |
GONDI HEMAMRUTHA , NAGA MALLESWARI DUBBA |
Abstract: |
Osteosarcoma is an extremely malignant bone cancer for which accurate tumour segmentation is important for effective diagnosis and treatment planning. One of the primary challenges facing segmentation models is the
paucity of human annotated medical imaging data. In this paper, we introduce a
generative adversarial network (GAN) -based method for accurate segmentation and
synthesis augmentation of osteosarcoma tumours in medical imaging — GAN-OST. The
suggested approach utilizes a U-Net styled generator to achieve precise tumour
segmentation and PatchGAN-based discriminator for creation of high-quality
synthetic images. To make up for the lack of training data, GAN-OST adopts a
conditional GAN to create realistic synthetic tumour images that are difficult
to distinguish from real tumours and thus can be used as a supplement for model
training so as to improve the generalization of the model. We evaluate the performance of our model with the Dice Coefficient, Intersection over Union (IoU), Sensitivity, Structural Similarity Index (SSIM), and Fréchet Inception Distance (FID), providing a comprehensive assessment of segmentation accuracy and image quality. Experimental results on
publicly available osteosarcoma datasets demonstrate the superior performance of
GAN-OST relative to traditional segmentation approaches, including a remarkable
increase in both segmentation precision and generalization. Additionally, the
synthetic data created by GAN-OST successfully compensates for the lack of data in small datasets and supports reliable model training. This work
demonstrates the advancement of osteosarcoma tumour segmentation and provides a
network used for data augmentation that could greatly help other rare cancer
types and multimodal imaging scenarios in future research. |
Keywords: |
Data Augmentation, Generative Adversarial Networks (GANs), Medical Imaging,
Osteosarcoma, Segmentation, Synthetic Data. |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
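The Dice coefficient and IoU used to score segmentation in the entry above have standard definitions; a small sketch of both, on placeholder binary masks, follows. This is the generic formulation, not code from the paper.

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU = |A∩B| / |A∪B| for binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
gt = np.zeros((64, 64), bool);   gt[15:45, 15:45] = True
print("Dice:", dice(pred, gt), "IoU:", iou(pred, gt))
```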
|
Title: |
INVESTIGATION AND COMPARISON OF FEATURE SELECTION TECHNIQUES WITH HYPER
PARAMETER TUNING FOR PREDICTING PROACTIVE CARDIOVASCULAR DISEASE |
Author: |
JANGAM RAGHUNATH , S KIRAN |
Abstract: |
Despite efforts to treat cardiovascular disease (CVD), it remains one of the major causes of death; hence, there is a need for accurate and swift predictive models to enable early diagnosis and intervention for such patients. This study analyzes and compares several feature selection methods in the presence of hyperparameter tuning to improve CVD prediction models. Feature selection is important for improving a model's interpretability, reducing computational complexity, and removing redundant or irrelevant features. Additionally, filter, wrapper, and embedded methods (e.g., Mutual Information, Recursive Feature Elimination, and LASSO regression) are compared to determine which works best with the CVD dataset. To further improve model performance, hyperparameter tuning of machine learning classifiers such as Logistic Regression, Support Vector Machine, Random Forest, and XGBoost is applied using Grid Search and Bayesian optimization. Different combinations of feature selection and hyperparameter tuning are assessed based on performance metrics including accuracy, recall, precision, F1-score, and area under the receiver operating characteristic (ROC-AUC) curve. The proposed method attained 89%, which is far better than models such as Logistic Regression (LR) at 85%, Support Vector Machine (SVM) at 81%, K-Nearest Neighbors (KNN) at 86.9%, and Artificial Neural Networks (ANN) at 87%. In particular, the proposed method is 4.7% better than LR, 9.8% better than SVM, 2.29% better than KNN, and 2.3% better than ANN. The experimental results show that model performance can be improved greatly with XGBoost and Random Forest when feature selection is integrated with optimized hyperparameter tuning. Modern IT methodologies boost model efficiency by using fewer resources while achieving extended operational life. Integrating feature selection enhances diagnostic predictions, as IT-based analytics plays an important role in medical diagnosis. |
Keywords: |
Cardiovascular Disease, Feature Selection, Hyper-parameter Tuning, Machine
Learning, Predictive Modeling. |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
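To illustrate two of the feature-selection strategies named in the entry above (a mutual-information filter and an RFE wrapper), here is a small comparison on synthetic "CVD-like" data; the thresholds, classifier, and data are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif, RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=1000, n_features=40, n_informative=8, random_state=0)

pipelines = {
    "mutual_info": Pipeline([
        ("select", SelectKBest(mutual_info_classif, k=10)),   # filter method
        ("clf", RandomForestClassifier(random_state=0)),
    ]),
    "rfe": Pipeline([
        ("select", RFE(LogisticRegression(max_iter=1000), n_features_to_select=10)),  # wrapper
        ("clf", RandomForestClassifier(random_state=0)),
    ]),
}
for name, pipe in pipelines.items():
    scores = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
    print(name, scores.mean())
```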
|
Title: |
THE MAPPING AND IDENTIFICATION MODEL OF IT TECHNOPRENEURSHIPS STUDENT POTENTIAL
USING DATA MINING AND ARTIFICIAL INTELLIGENCE |
Author: |
WIJI LESTARI, SINGGIH PURNOMO, INDRA HASTUTI, SRI SUMARLINDA |
Abstract: |
Information Technology (IT) technopreneurship is growing very rapidly in line with technological advances. This study aims to produce a model for mapping and identifying student potential in IT technopreneurship using intelligent computing. The mapping is based on four areas of IT technopreneurship, namely Software Application Developer, Data Analyst, Computer System & Network Engineer, and Multimedia & Graphics Developer. The indicators used are interest, entrepreneurial values, learning styles (solitary, logical, aural, social, verbal, visual, and physical), and multiple intelligences (visual-spatial, logical-mathematical, bodily-kinesthetic, naturalist, musical, linguistic, interpersonal, and intrapersonal). This study used research and development and action research approaches, and the intelligent computing system was developed with the prototype method. Internal testing showed that the models ran well with low cyclomatic complexity. The novel model produced is a mapping for IT technopreneurship based on students' personal cognitive characteristics and intelligent computing. The user acceptance test yielded 96%, the expert judgment test (Computing/IT) 98%, and the expert judgment test (Entrepreneurship/Technopreneurship) 95%. Model validation on 20 IT technopreneurs achieved a precision of 85%. Implementation of the model identified 60 students as Software Application Developers, 65 as Data Analysts, 56 as Computer System & Network Engineers, 50 as Multimedia & Graphics Developers, and 24 students as not suited to become IT technopreneurs. |
Keywords: |
Mapping And Identification Model, Students Potential, IT Technopreneurships,
Intelligent Computing, Personal Characteristics |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
Title: |
PROTOTYPING OF A TWO-WHEELED MOBILE ROBOT FOR SUSTAINABLE MANUFACTURING
DEVELOPMENT BASED ON TRIANGULATION METHOD AND SOFTWARE DEVELOPMENT |
Author: |
MOHAMMAD HAMDAN , ISRAA WAHBI KAMAL , AMER ABU-JASSAR , SVITLANA MAKSYMOVA ,
VYACHESLAV LYASHENKO |
Abstract: |
This article presents the prototyping of a mobile robot and the development of software for its control. The robot prototype moves on a two-wheeled base and is equipped with ultrasonic sensors. This design increases the robot's maneuverability and reduces its turning radius, while the specific arrangement of the sensors expands the obstacle detection area. The developed software receives data from the sensors, processes them, and constructs a movement trajectory in accordance with the target and the current state of the environment. The trajectory is constructed using the triangulation method, which allows the robot to accurately determine the distance to an obstacle, construct a rational movement trajectory, and go around obstacles that may arise on its path. A number of experiments were conducted which show that the obstacle detection range is sufficient for a timely response and a change of the movement trajectory. |
Keywords: |
Mobile Robot, Triangulation, Ultrasonic Sensor, Two-Wheeled Robot, Software for
Mobile Robot |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
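A simple triangulation sketch in the spirit of the entry above: two range sensors mounted a known baseline apart each report a distance to the same obstacle, and intersecting the two circles gives the obstacle position in the robot frame. The geometry here is generic and not the paper's exact sensor arrangement.

```python
import math

def triangulate(d_left, d_right, baseline):
    """Obstacle (x, y) with sensors at (-baseline/2, 0) and (+baseline/2, 0)."""
    x = (d_left**2 - d_right**2) / (2 * baseline)
    y_sq = d_left**2 - (x + baseline / 2) ** 2
    if y_sq < 0:
        raise ValueError("readings inconsistent with the baseline")
    return x, math.sqrt(y_sq)   # take the solution in front of the robot (y >= 0)

# Example: left sensor reads 0.52 m, right sensor 0.48 m, sensors 0.20 m apart.
x, y = triangulate(0.52, 0.48, 0.20)
print(f"obstacle at x={x:.3f} m, y={y:.3f} m, range={math.hypot(x, y):.3f} m")
```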
|
Title: |
EVALUATION OF ADHD: CLASSIFICATION, TREATMENT STRATEGIES IN PEDIATRIC AND
ADOLESCENT POPULATIONS USING MACHINE LEARNING |
Author: |
M GEETHA PRIYA , P KALYANARAMAN |
Abstract: |
Attention-Deficit/Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder
characterized by inattention, impulsivity, and hyperactivity. Despite extensive
research, existing diagnostic approaches rely heavily on subjective clinical
assessments, leading to misdiagnoses and delayed interventions. Furthermore,
conventional treatment strategies lack personalization, often resulting in
suboptimal therapeutic outcomes. This study addresses these gaps by leveraging
machine learning techniques to enhance ADHD classification and treatment
strategies, specifically for pediatric and adolescent populations. Unlike
previous studies that primarily focus on symptom-based categorization, our
research integrates multi-modal data, including neuroimaging, behavioral
assessments, and genetic markers, to improve diagnostic accuracy. A novel hybrid
machine learning model is proposed, incorporating convolutional autoencoders and
advanced neural architectures to extract discriminative features for precise
classification. Additionally, the study explores AI-driven personalized
treatment recommendations, optimizing intervention strategies based on
patient-specific patterns. The findings of this research contribute to the
development of an objective, data-driven framework for ADHD diagnosis and
treatment, reducing reliance on subjective evaluation. This work establishes a
foundation for future AI-assisted clinical decision-making, ultimately improving
patient outcomes and advancing ADHD research. |
Keywords: |
ADHD, Brainwave, CNN, Cognitive, Disorder |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
Title: |
ENHANCING STOCK MARKET INVESTMENT DECISIONS THROUGH BLOCKCHAIN TRANSACTION
SECURITY: A STUDY ON INVESTOR INTENTIONS |
Author: |
RAYMOND HARYADI, ELFINDAH PRINCES |
Abstract: |
This study analyzes the effect of transaction security using Blockchain
technology on customers' decision to invest in the stock market. Blockchain is
known as a revolutionary technology that offers high security through
decentralization, transparency, and immutability. In the context of investment,
this technology has the potential to reduce the risk of data manipulation,
fraud, and information leakage that are often a challenge in online investment. A quantitative approach is used with the Theory of Planned Behavior (TPB)
framework to measure the influence of variables such as attitude towards money,
subjective norms, perceived behavioral control, and Blockchain transaction
security on investment intentions. The sample consisted of 460 respondents who
had experience or interest in investing in listed companies. Data was collected
through a structured questionnaire and analyzed using Smart PLS to validate the research model. The results show that Blockchain-based transaction security, such
as cryptographic encryption, smart contracts, and transparent transaction
recording, has a significant influence on investment intention by increasing
investor confidence. Psychological factors, such as attitudes toward money and
subjective norms, also strengthen investment intentions, reflecting the
importance of perceived security and social support. This research makes a
practical contribution for companies to adopt Blockchain as a strategic move to
increase investor confidence. This technology can be the basis for creating a
safe and sustainable investment ecosystem. In addition, this research enriches
the academic literature on the role of Blockchain in investment decisions and
recommends better investment security standards for regulators. |
Keywords: |
Blockchain, Transaction Security, Stock Market, Theory of Planned Behavior (TPB),
Intention To Invest, Transparency, Risk Of Fraud, Behavior Control, Subjective
Norms. |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
Title: |
MULTI-LEVEL IMAGE DENOISING INTEGRATED MORPHOLOGY-BASED IMAGE QUALITY
ENHANCEMENT MODEL WITH EDGE-BASED SEGMENTATION FOR ACCURATE PANCREATIC CANCER
DETECTION |
Author: |
SRIPATHI CHAITANYA BHARATHI , ESWARAIAH RAYACHOTI |
Abstract: |
Despite the availability of various methods, diagnosing and treating Pancreatic Cancer (PC) remains a significant challenge among all tumour types owing to its asymptomatic development. Among the most devastating diseases,
pancreatic cancer has claimed the lives of countless people around the globe.
Traditional methods of diagnosis relied on manual analysis of massive datasets,
which was laborious, error-prone, and time-consuming. Therefore, CADs, which use
machine learning and deep learning techniques for pancreatic cancer denoising,
segmentation, and classification, become necessary. However, there are obstacles
to medical image analysis of pancreatic cancer owing to vague symptoms, high
rates of misdiagnosis, and substantial monetary expenses. A potential answer is
Artificial Intelligence (AI), which can reduce patient expenditures, improve
clinical decision-making, and alleviate the workload of medical workers. The use
of medical imaging scans has allowed many cancer patients to detect anomalies
earlier on. This research makes use of Computed Tomography (CT) images for
performing image denoising and segmentation. The high price of the required
equipment and infrastructure makes it difficult to spread the technology, which
means that many people cannot afford it. Pancreatic cancer detection using
medical image analysis is greatly impeded by noisy, low-quality images that mask
important diagnostic details and lower detection accuracy. This research tackles
the important problem of creating a state-of-the-art image processing method
that can reliably and effectively denoise medical images, improve image quality
using morphological techniques, and apply precise edge-based segmentation to
better visualize and detect pancreatic cancer early on. In this research, a new
Multi-Level Image Denoising and Integrated Morphology based Image Quality
Enhancement Model (MIDIMIQEM) by edge-based segmentation for precise pancreatic
cancer detection is proposed. The proposed model solves these problems in medical image analysis by introducing a multi-level denoising technique and morphology-based enhancement with robust edge-based segmentation for enhanced diagnostic accuracy. The model uses wavelet-based multilevel image denoising for noise removal, and thereafter morphological operations are employed to improve contrast, which helps to distinguish tumor structures more easily. The segmented regions are further refined by edge detection techniques, which increases the chances of accurate determination of cancerous tissues. Experimental evaluation of
MIDIMIQEM demonstrates that the image quality, segmentation accuracy and small
nodules detection performance are significantly improved compared with
state-of-the-art models. The proposed model achieved 98.7% accuracy in multi-level image denoising and 99.3% segmentation accuracy. This
novel technique is expected to help radiologists make an early and accurate
diagnosis that could improve patient outcomes. With the use of cutting-edge
edge-based morphological processing methods, the proposed multi-level image denoising and morphology-based enhancement model outperforms state-of-the-art
methods for pancreatic cancer image segmentation, greatly enhancing diagnostic
precision and opening up new avenues for early detection. |
Keywords: |
Image Denoising, Morphology-Based Enhancement, Pancreatic Cancer Detection,
Edge-Based Segmentation, Medical Image Analysis, Wavelet Transform, Tumor
Localization. |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
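A hedged sketch of the kind of pipeline the entry above describes follows: wavelet-based denoising, morphological contrast enhancement, and edge-based segmentation of a CT slice. The concrete filters, wavelet, and thresholds used in MIDIMIQEM are not specified here; the input is a placeholder array.

```python
import numpy as np
import cv2
from skimage.restoration import denoise_wavelet

ct = np.random.rand(256, 256).astype(np.float32)   # stand-in for a CT slice scaled to [0, 1]

# 1) Multi-level wavelet denoising.
denoised = denoise_wavelet(ct, wavelet="db2", mode="soft", rescale_sigma=True)

# 2) Morphological enhancement: top-hat highlights bright structures against the background.
img8 = (denoised * 255).astype(np.uint8)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
tophat = cv2.morphologyEx(img8, cv2.MORPH_TOPHAT, kernel)
enhanced = cv2.add(img8, tophat)

# 3) Edge-based segmentation of the enhanced image.
edges = cv2.Canny(enhanced, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print("candidate regions:", len(contours))
```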
|
Title: |
ENHANCING INTER-DOMAIN ROUTING POLICY AUTOMATION WITH BLOCKCHAIN AND SMART
CONTRACTS |
Author: |
SANKARA MAHALINGAM M, N. SURESH KUMAR , R. KANNIGA DEVI |
Abstract: |
The Border Gateway Protocol (BGP), which manages inter-domain routing, undergoes
frequent changes. This requires strong policy management to keep data transfer
across different networks (called Autonomous Systems, or ASes) safe and
efficient. Traditional methods usually require physical setup, which can cause
mistakes and security risks. This paper looks at how blockchain technology and
smart contracts can work together to make inter-domain routing rules automatic.
We propose a framework called the RPAChain (Routing Policy Automation Chain),
which enhances trust, transparency, and automation in routing policy management
by utilising the decentralised and immutable ledgers of blockchain technology
along with the self-executing capabilities of smart contracts. We explain the
system's working details, including the automatic routing processes. We then
examine the results and their implications for future routing between various
domains. |
Keywords: |
Inter-Domain Routing, Border Gateway Protocol (BGP), Blockchain, Smart
Contracts, Autonomous Systems (ASes), Routing Policy Automation, Network
Security. |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
Title: |
AI-POWERED EMPATHY: SENTIMENT ANALYSIS IN PERSONAL CARE USING RoBERTa AND XLNet |
Author: |
DR. REVATHI DURGAM , NARENDRA BABU PAMULA , NALLANI DHARANI ,DR. B V N SIVA
KUMAR ,DR. PUTTA DURGA , VELCHURI BALAJI |
Abstract: |
In the current digital era, personal care products depend heavily on user reviews,
since these reveal vital information about consumer preferences and satisfaction.
However, the intricacy and diversity of natural language make large-scale
evaluation of these reviews challenging. This study is motivated by the personal
care industry's need to understand how sentiment analysis grounded in Natural
Language Processing (NLP) techniques can be applied. We use NLP models to detect
consumer sentiment quickly and more precisely, guiding review classification, from
which companies can gain substantial insight into consumer attitudes and product
performance. Topics covered include how to select appropriate models, how to
compile review data for sentiment analysis, and how sentiment classification
influences marketing and product strategy. In the increasingly competitive
personal care industry, NLP can help streamline the analysis of user input,
facilitating data-driven decisions and greater consumer satisfaction. In online
shopping, questions about product quality frequently arise; reviews, in which
customers provide honest and forthright comments after purchasing goods, give
direct answers to such questions. Our objective is to use NLP-based sentiment
analysis to classify past product reviews as either favourable or negative,
achieving an accuracy of 94%. Both customers and product companies benefit from
this when making purchase judgments and applying appropriate product
modifications. |
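A minimal sketch of the kind of review-level sentiment classification described here, using the Hugging Face transformers pipeline; the default checkpoint, example reviews, and label mapping are placeholders rather than the paper's configuration (a RoBERTa or XLNet checkpoint could be supplied via the model argument).

    # Illustrative sketch: binary sentiment classification of personal care reviews.
    from transformers import pipeline

    # A RoBERTa- or XLNet-based checkpoint can be passed via the `model` argument.
    classifier = pipeline("sentiment-analysis")

    reviews = [
        "The moisturizer absorbed quickly and my skin feels great.",
        "The shampoo left a residue and the scent was overpowering.",
    ]

    for review, result in zip(reviews, classifier(reviews)):
        label = "favourable" if result["label"] == "POSITIVE" else "negative"
        print(f"{label:>10}  ({result['score']:.2f})  {review}")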
Keywords: |
Natural Language Processing (NLP), Sentiment Analysis, Customer Reviews,
Personal Care Products, Opinion Mining, Text Classification, Emotion Detection,
Product Feedback Analysis, Consumer Insights |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
Title: |
DEVELOPMENT OF NEW MACHINE LEARNING ALGORITHM FOR CUCUMBER AND GRAPE LEAF
DISEASE DETECTION USING MULTI SVM WITH CUSTOM KERNELS |
Author: |
C. NANCY, DR. S. KIRAN |
Abstract: |
Crop disease is a widespread problem in agricultural production; it affects both
the productivity and the quality of crops. Cucumbers are vegetables with high
water content and many minerals and vitamins, but they are susceptible to several
diseases. Grapes, the main ingredient in wine production, are severely attacked by
brown spot, mites, anthracnose, downy mildew, black rot, and leaf blight.
Identifying and detecting crop diseases is therefore of utmost importance for
reducing farmers' economic losses and improving productivity, but manual detection
takes a huge amount of time and sometimes gives inaccurate results. Advanced
artificial intelligence techniques, including AlexNet, VGG-16, U-Net, ResNet,
VGG-19, Fine KNN, Random Forest, and YOLO v5, have been used to address these
challenges, but they struggle with irrelevant features, noise, and poor
performance. To handle these issues we propose a novel approach using a Multi-SVM
with custom SVM kernels together with GLCM features. The primary objective of the
new algorithm is to enhance the model's performance and increase the efficiency of
disease detection. Our cucumber and grape datasets were collected from Kaggle and
directly from cucumber farms, and contain four categories: Healthy, Powdery
Mildew, Downy Mildew, and Target Leaf Spot. In the first phase, image augmentation
is applied to enhance the images. Second, the affected area is estimated using the
K-Means algorithm. Next, GLCM features are extracted from the images. Finally,
classification is performed using the Multi-SVM algorithm with custom SVM kernels.
Our approach achieved the best identification result, with an mAP of 90.62%
compared with 84.6% for YOLO v5M. |
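To make the feature-and-kernel stage concrete, here is a hedged sketch (not the authors' code) that extracts GLCM texture descriptors with scikit-image and plugs a custom kernel callable into scikit-learn's multi-class SVC; the chi-square-style kernel and the chosen GLCM properties are assumptions for illustration.

    # Illustrative sketch: GLCM texture features per leaf image, then a
    # multi-class SVM with a custom kernel.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.svm import SVC

    def glcm_features(gray_leaf: np.ndarray) -> np.ndarray:
        """Contrast, homogeneity, energy and correlation from a gray-level co-occurrence matrix."""
        glcm = graycomatrix(gray_leaf, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation"]
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    def chi2_kernel(X, Y, gamma=1.0):
        """Custom kernel: exp(-gamma * chi-square distance) between feature rows."""
        d = np.array([[np.sum((x - y) ** 2 / (x + y + 1e-9)) for y in Y] for x in X])
        return np.exp(-gamma * d)

    # X: (n_samples, n_features) GLCM feature matrix; y: labels such as
    # healthy / powdery_mildew / downy_mildew / target_leaf_spot
    # clf = SVC(kernel=chi2_kernel).fit(X_train, y_train)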
Keywords: |
Random Forest Algorithms, YOLO v5, GLCM, mAP, MultiSVM. |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
Title: |
SURVEY ON SOFTWARE ARCHITECTURE FOR QUANTUM COMPUTING: DESIGN PRINCIPLES,
CHALLENGES, AND FUTURE DIRECTIONS |
Author: |
KHALIL JAMOUS |
Abstract: |
Quantum computing is envisioned as a route to solving problems that could not be
solved within a feasible period of time under the classical paradigm. Despite
these promises, it must deliver on them through software architectures suited to
the processing constraints that quantum computer hardware dictates. This paper
revisits the fundamentals of quantum software architecture design, guided by
principles of modularity, hybridization, scalability, and fault tolerance, with
particular emphasis on how modularity and hybridization help manage complexity and
optimize system performance. The paper also highlights major challenges in the
development of quantum software: hardware constraints, lack of standardization,
and the need for effective fault-tolerance mechanisms. The survey compares
existing frameworks such as Qiskit, Cirq, and Amazon Braket to expose
interoperability gaps and platform specialization. It also discusses future
directions, including hybrid architectures, integration of machine learning,
standardization efforts, and hardware-software co-design, which will be essential
to drive quantum computing toward scalability and widespread adoption. The work is
a rich resource for both researchers and practitioners, setting out an overview of
the current state of the field and offering a vision for creating the next
generation of quantum software architectures: efficient, scalable, and prepared to
respond to diverse application demands. |
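The modularity and hybridization principles the survey discusses can be glimpsed in a tiny Qiskit sketch: a parameterized sub-circuit defined once and composed into a larger circuit, with the parameter left free for a classical optimization loop to update. The layer structure and parameter value are illustrative only.

    # Minimal Qiskit sketch of a reusable, parameterized module.
    from qiskit import QuantumCircuit
    from qiskit.circuit import Parameter

    theta = Parameter("theta")

    def entangling_layer() -> QuantumCircuit:
        """Reusable two-qubit module: a rotation followed by an entangling gate."""
        block = QuantumCircuit(2, name="entangle")
        block.ry(theta, 0)
        block.cx(0, 1)
        return block

    circuit = QuantumCircuit(2)
    circuit.compose(entangling_layer(), qubits=[0, 1], inplace=True)
    circuit.measure_all()
    bound = circuit.assign_parameters({theta: 0.5})   # a hybrid loop would update theta classically
    print(bound.draw())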
Keywords: |
Quantum Computing, Software Architecture, Design Principles, Challenges, Future
Directions. |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
Title: |
A NEW PARADIGM IN FRAUD DETECTION: LEVERAGING SOCIAL MEDIA TO PREDICT MCCS IN
REAL TIME |
Author: |
MAROUANE AIT SAID , ABDELMAJID HAJAMI , AYOUB KRARI |
Abstract: |
Card-not-present (CNP) fraud presents a persistent challenge in the financial
industry, costing institutions billions annually. Traditional fraud detection
models rely heavily on transaction history and rule-based mechanisms, making
them reactive rather than proactive. These systems struggle to detect emerging
fraud patterns that evolve beyond historical transaction-based limitations. In
this study, we introduce an innovative fraud detection framework that leverages
social media analytics to predict a cardholder’s likely future Merchant Category
Code (MCC), providing a proactive layer of fraud prevention. Our approach
integrates natural language processing (NLP) techniques with machine learning to
analyze publicly available social media posts from business pages linked to
specific MCC categories. By transforming unstructured text data using Term
Frequency-Inverse Document Frequency (TF-IDF) vectorization, we extract relevant
linguistic features and apply a Multinomial Naïve Bayes classifier to predict
MCCs with high accuracy. To ensure model scalability and robustness, we expanded
our dataset from 120,000 to over three million records using data amplification
techniques. Our model achieves an accuracy of 99.1% through cross-validation,
significantly improving real-time MCC prediction for transaction authentication.
Unlike conventional fraud detection systems that react only to anomalies in past
spending behavior, our method proactively forecasts spending categories based on
dynamic social media interactions. By comparing the predicted MCCs with those in
real-time transactions, financial institutions can identify potential fraudulent
activities before they occur. This novel integration of behavioral insights from
social media into fraud detection enhances the adaptability of fraud prevention
systems, bridging the gap between static transaction monitoring and real-time
fraud anticipation. This research demonstrates that incorporating social media
discourse into financial security mechanisms can revolutionize CNP fraud
detection. Future work will focus on expanding the model to accommodate
multilingual data and additional MCC categories, ensuring its applicability
across diverse consumer markets. |
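A compact sketch of the TF-IDF plus Multinomial Naive Bayes stage described above, using scikit-learn; the example posts and MCC labels are invented placeholders, not data from the study.

    # Illustrative sketch: predicting a Merchant Category Code from social media text.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB

    posts = [
        "Fresh espresso and pastries every morning at our cafe",
        "New arrivals: running shoes and training gear in store now",
    ]
    mcc_labels = ["5812", "5941"]   # eating places, sporting goods (illustrative labels)

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1), MultinomialNB())
    model.fit(posts, mcc_labels)
    print(model.predict(["Weekend discount on espresso machines and coffee beans"]))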
Keywords: |
Card-Not-Present Fraud, Machine Learning, Merchant Category Codes (MCC),
Multinomial Naive Bayes Classifiers, Natural Language Processing (NLP),
Predictive Analytics, Proactive Fraud Prevention, Real-Time Fraud Detection. |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
Title: |
USING VIDEO TITLES TO PREDICT YOUTUBE AUDIENCE BEHAVIOR BASED ON MACHINE
LEARNING |
Author: |
CHIH-CHIEN WANG , YI-FENG LIN , YA-CHEN HSIEH , YU-HAN KAO |
Abstract: |
As the leading video-sharing platform, YouTube has evolved into a new form of
media, facilitating content dissemination by creators and fostering active
audience participation. Enhanced viewer engagement offers creators increased
influence and financial benefits due to heightened popularity. However, the vast
quantity of videos available on YouTube means that only a select few manage to
captivate significant audience attention. Within YouTube's interface, video
titles play a crucial role in attracting users, making the exploration of the
relationship between audience engagement and video titles an important research
topic. This study aims to predict audience engagement by analyzing video titles
using various machine learning methods. The study employs the Linguistic Inquiry
and Word Count (LIWC) software for natural language processing, which calculates
the frequency of word categories in the text, such as pronouns, emotional, and
cognitive words, presenting these frequencies as relative percentages. Results
indicate that different textual categories within video titles correlate
significantly with audience liking and commenting behaviors. Among the evaluated
machine learning techniques, Random Forest and K-Nearest Neighbors Regression
models exhibit superior predictive performance. The findings provide insights
into the intricate interplay between textual features of video titles and their
impact on audience engagement. This research serves as a valuable reference for
video creators, guiding their decision-making process concerning video titles to
optimize audience interaction. Additionally, the study contributes to existing
literature by elucidating the nuanced relationship between textual elements in
video titles and audience responsiveness. |
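The regression setup can be sketched as follows, assuming LIWC-style word-category percentages have already been computed per title; the feature table, target values, and hyperparameters are illustrative stand-ins rather than the study's data.

    # Illustrative sketch: predicting engagement from word-category percentages
    # with the two model families reported as strongest above.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.model_selection import cross_val_score

    # Columns: % pronouns, % positive-emotion words, % cognitive words per title
    X = np.array([[8.3, 4.1, 2.0],
                  [0.0, 9.5, 1.2],
                  [5.6, 0.0, 6.7],
                  [2.1, 3.3, 3.3]])
    y = np.array([1200, 450, 300, 800])   # e.g. like counts

    for name, model in [("RandomForest", RandomForestRegressor(n_estimators=200, random_state=0)),
                        ("KNN", KNeighborsRegressor(n_neighbors=2))]:
        scores = cross_val_score(model, X, y, cv=2, scoring="r2")
        print(name, scores.mean())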
Keywords: |
Video titles, Linguistic Inquiry and Word Count (LIWC), Machine learning,
Audience behaviors |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
Title: |
A NOVEL ENTROPY-BASED CASCADED CAPSULE NEURAL NETWORK WITH AN OPTIMIZED LSTM FOR
ANOMALY SEGMENTATION AND CLASSIFICATION |
Author: |
SHAMEEM AKTHAR K, DR. K. LAKSHMI PRIYA |
Abstract: |
Anomaly Detection (AD) is an important and well-researched topic, but creating
successful AD techniques for complex and high-dimensional data remains difficult.
Although various detection methods exist, to improve efficiency and deliver
further advantages we suggest an entropy-based Cascaded Capsule Neural Network
(CCNN) with an optimized LSTM (OLSTM) for segmentation and classification of
anomalies in videos. Initially, pre-processing is performed using a Gaussian
filter, followed by segmentation with the Cascaded Capsule Neural Network.
Features are then extracted from the segmented data together with the
entropy-classified dataset. Finally, classification is carried out with the LSTM
optimized by the ROCO algorithm. The suggested approach achieves an accuracy of
98%, specificity of 99.8%, sensitivity of 92.9%, precision of 92.9%, F-measure of
92.9%, FNR of 10%, FPR of 0.3%, and a Matthews Correlation Coefficient (MCC) of
89%. These results show that the suggested approach performs more effectively than
the other methods currently in use. |
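A minimal Keras sketch of the final LSTM classification stage only; the capsule-network segmentation, the entropy step, and the ROCO-optimized hyperparameters from the paper are not reproduced here, and the input shape and layer sizes are assumptions.

    # Illustrative sketch: binary anomaly classifier over per-frame feature sequences.
    import tensorflow as tf

    def build_anomaly_classifier(timesteps: int = 16, n_features: int = 64) -> tf.keras.Model:
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(timesteps, n_features)),
            tf.keras.layers.LSTM(128, return_sequences=False),
            tf.keras.layers.Dropout(0.3),
            tf.keras.layers.Dense(1, activation="sigmoid"),   # anomaly probability
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        return model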
Keywords: |
Anomaly Detection, Entropy, CCNN, OLSTM, ROCO |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
Title: |
ADAPTIVE TRAFFIC SIGNAL CONTROL USING AI AND DISTRIBUTED SYSTEMS FOR SMART URBAN
MOBILITY |
Author: |
ISMAIL ZRIGUI , SAMIRA KHOULJI , MOHAMED LARBI KERKEB |
Abstract: |
Urban traffic congestion remains a pressing issue due to increasing vehicle
concentrations, non-adaptive traffic signal management, and unreliable road
conditions. Fixed or semi-adaptive traditional traffic control systems cannot
adapt to real-time traffic conditions, leading to wasted time,
excessive fuel usage, and greater pollutant emissions. This study proposes a
comprehensive framework for the optimization of urban traffic, leveraging
predictive modeling, adaptive signal control, and distributed messaging
infrastructure. The platform collects real-time sensor information, employs
machine learning techniques for short-term traffic prediction, and optimizes
signal timing through heuristic approaches such as Simulated Annealing (SA) and
Reinforcement Learning (RL). The asynchronous messaging architecture supports
flexible communication among prediction modules, controllers, and external
services to facilitate flexibility and scalability. Experimental validation
was conducted using the SUMO traffic simulator, where a standard fixed-cycle
signal scheme was compared to the proposed adaptive scheme. Results indicate
that there is a 21.6% reduction in average waiting time, a 12.9% reduction in
CO₂ emissions, and a 15.6% improvement in fuel efficiency. These findings
validate the effectiveness of the system in mitigating congestion and enabling
green urban mobility. Future work will involve incorporating
vehicle-to-everything communication data (V2X communication), multi-intersection
coordination, and deep learning traffic flow predictions in order to better
enhance adaptability and robustness at the large-scale deployment stage. |
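The simulated-annealing component can be illustrated with a toy sketch that searches over green-time splits for a single intersection; the delay() cost below is a placeholder for the feedback that SUMO or field sensors would provide in the actual framework, and all constants are illustrative.

    # Toy simulated-annealing sketch for two-phase green-time splits.
    import math, random

    def delay(greens, demand=(0.6, 0.4), cycle=90.0):
        """Hypothetical cost: squared mismatch between green share and demand share."""
        total = sum(greens)
        return sum((g / total - d) ** 2 for g, d in zip(greens, demand)) * cycle

    def anneal(greens=(45.0, 45.0), t0=10.0, cooling=0.95, steps=500):
        best = cur = list(greens)
        t = t0
        for _ in range(steps):
            cand = [max(10.0, g + random.uniform(-3, 3)) for g in cur]   # respect minimum green
            d = delay(cand) - delay(cur)
            if d < 0 or random.random() < math.exp(-d / t):
                cur = cand
                if delay(cur) < delay(best):
                    best = cur[:]
            t *= cooling
        return best

    print(anneal())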
Keywords: |
Intelligent Transportation Systems, Traffic Optimization, Adaptive Signal
Control, Predictive Modeling, Reinforcement Learning, Smart Cities |
Source: |
Journal of Theoretical and Applied Information Technology
30th April 2025 -- Vol. 103. No. 8-- 2025 |
Full
Text |
|
|
|