|
Submit Paper / Call for Papers
The journal receives papers on a rolling basis and considers articles
from a wide range of Information Technology disciplines, from basic research
to the most innovative technologies. Please submit your papers
electronically to our submission system at http://jatit.org/submit_paper.php in
MS Word, PDF, or a compatible format so that they may be evaluated for
publication in an upcoming issue. This journal uses a blinded review process;
you may leave your personally identifiable information in the manuscript when
submitting it for review, and we will remove the necessary information on our
side. Submissions to JATIT should be full research / review
papers (properly indicated below the main title).
|
|
|
Journal of
Theoretical and Applied Information Technology
July 2025 | Vol. 103 No.14 |
Title: |
HYBRID OPTIMIZATION TECHNIQUES FOR MOBILITY-AWARE, ENERGY-EFFICIENT SMALL CELL
DEPLOYMENT IN 5G NETWORKS |
Author: |
S. VINODH KUMAR, A. VIJAYALAKSHMI, A. PACKIALATHA, B. EBENEZER ABISHEK |
Abstract: |
Expanding wireless communication networks is necessary to meet the growing
number of mobile devices and the demand for faster internet. One practical way
to increase network capacity and coverage in heavily populated regions is to
deploy small cells. Dense small-cell deployments, however, consume more energy,
increasing operating costs and environmental impact. Traditional deployment
approaches also ignore user mobility, despite its substantial impact on network
performance. We present a strategy for small-cell deployment in 5G networks
that uses hybrid optimization techniques to address mobility awareness and
energy efficiency. To improve data transfer capacity and increase user density
in small cells, the proposed strategy clusters users with a Modified
Smell-Bees Optimization (MSBO) algorithm. It then introduces a Gannet Optimal
Induced Cuckoo Search (GOCS) approach that groups small cells into optimal
locations while accounting for various design constraints. Finally, it lays
out an Improved Coral Reef Optimization (ICRO) approach that takes reliability
criteria into account, including connection quality, user mobility, congestion
rate, and mean time to failure, to assist in the placement of compact base
stations. Simulations conducted in the Google Colab environment show
substantial gains in key Quality of Service (QoS) measures. The
MSBO-GOCS-ICRO model outperforms the well-known GSCP, TIPA, and ECM-BPSD
models in several respects: it cuts convergence time by 49%, increases the
number of small base stations in use by 64%, and makes the network 154% more
energy efficient. These findings indicate that the suggested approach is the
optimal choice for the deployment of small cells in 5G networks. |
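The MSBO clustering algorithm itself is not reproduced in the abstract; as a generic, hypothetical stand-in for the user-clustering step it describes (grouping users so that candidate small-cell sites serve dense areas), a plain k-means pass over simulated user positions might look like this — k-means is our illustration, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(1)

# 200 simulated user positions (metres) in a 1 km x 1 km service area.
users = rng.uniform(0, 1000, size=(200, 2))
k = 5  # candidate small-cell count (illustrative)

# Initialise centres from randomly chosen users, then iterate the two
# k-means steps: assign each user to its nearest centre, recompute centres.
centres = users[rng.choice(len(users), size=k, replace=False)]
for _ in range(50):
    dists = np.linalg.norm(users[:, None, :] - centres[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centres = np.array([users[labels == j].mean(axis=0)
                        if np.any(labels == j) else centres[j]
                        for j in range(k)])

# Each cluster centre is a candidate small-cell site serving its users.
print(labels.shape, centres.shape)
```

MSBO would replace the assignment/update rule above with its bee-inspired search while keeping the same clustering objective.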
Keywords: |
5G networks, Small cell deployment, Hybrid optimization, Energy efficiency,
Mobility management, Quality of Service. |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
PREDICTING FRAUD: A MACHINE LEARNING APPROACH TO SECURE TRANSACTIONS IN CREDIT
CARD SYSTEM |
Author: |
BASHAR I. HAMEED, MOHAMMED A. MOHAMMED, HUMAM K. YASEEN |
Abstract: |
The expansion of e-commerce has uncovered extensive vulnerabilities in web-based
transactions, creating opportunities for fraud. The heavy use of credit cards in
online transactions, motivated by perks such as discounts and bonuses, has
resulted in a substantial rise in credit card fraud. Conventional strategies,
such as manual checks and inspections, while traditionally employed, face
significant obstacles in identifying fraudulent actions due to their
time-intensive nature, high cost, and imprecision. The emergence of Artificial
Intelligence (AI), Machine Learning (ML), and Deep Learning (DL)-based
techniques provides a promising solution for addressing fraudulent activities
by enabling pattern recognition and anomaly detection in financial
transactions. Even with recent advances in research into ML-based credit card
fraud detection, the imbalance in credit transaction data makes identifying
fraudulent activities a challenging task. This paper presents an advanced
credit card fraud detection system using ML and DL algorithms, paying
particular attention to the characteristics of the anomaly detection scenario.
The paper analyzes a specific case study of a credit card dataset, highlighting
the important preliminary steps for creating the necessary processes before the
proposed model is applied. Our experimental results demonstrate that the
Logistic Regression model achieved superior performance on the evaluation
metrics compared to the other models tested in our experiment. |
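The abstract does not include the authors' implementation; as a minimal, self-contained sketch of how the class imbalance it highlights can be handled, the following fits a class-weighted logistic regression by plain gradient descent on a synthetic imbalanced dataset (all data, weights, and hyperparameters are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy imbalanced data: 950 legitimate vs 50 fraudulent transactions,
# two illustrative features (e.g. scaled amount and time gap).
X_neg = rng.normal(0.0, 1.0, size=(950, 2))
X_pos = rng.normal(2.0, 1.0, size=(50, 2))
X = np.vstack([X_neg, X_pos])
y = np.concatenate([np.zeros(950), np.ones(50)])

# Class weights inversely proportional to class frequency, so the rare
# fraud class is not drowned out during training.
w_pos = len(y) / (2 * y.sum())
w_neg = len(y) / (2 * (len(y) - y.sum()))
sample_w = np.where(y == 1, w_pos, w_neg)

# Logistic regression fitted by weighted gradient descent.
Xb = np.hstack([X, np.ones((len(X), 1))])  # add bias column
theta = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ theta))
    grad = Xb.T @ (sample_w * (p - y)) / len(y)
    theta -= 0.5 * grad

pred = (1.0 / (1.0 + np.exp(-Xb @ theta)) >= 0.5).astype(int)
recall = pred[y == 1].mean()  # fraction of fraud cases caught
print(round(float(recall), 2))
```

The weighting shifts the decision boundary toward the majority class, so recall on the minority (fraud) class stays high despite the 19:1 imbalance.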
Keywords: |
Fraudulent Financial Transactions, Credit Card, Fraud Detection, Machine
Learning, Deep Learning |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
BIG DATA ADOPTION IN INTERNATIONAL LOGISTICS MANAGEMENT |
Author: |
HASSAN ALI AL-ABABNEH, OLHA MAIBORODA, IHOR KUKIN, VADYM YAKOVETS, LIUBOV HRYNIV |
Abstract: |
Big Data adoption increasingly affects international logistics
management, thereby improving operational efficiency and competitiveness. The
aim of this study is to analyse the impact of the use of Big Data (BD) sets on
the logistics efficiency of companies in different countries for 2019–2023. The
study uses regression modelling and case studies together with econometric
analysis of companies to determine how technology infrastructure, regulatory
environment, market dynamics, operational performance, and cultural context
influence the causal relationship between BD and logistics efficiency. There is
limited research on how BD adoption interacts with contextual factors in global
logistics environments. The results demonstrate that deeper BD adoption is
associated with higher business logistics efficiency, and the effectiveness of
this interaction is moderated by the quality of the regulatory framework and
technological infrastructure. The study shows the significant influence of
cultural factors on organizational approaches to BD use. The study also
emphasizes that companies must invest in new technologies and adapt to
regulatory changes to fully realize the benefits of BD. The
findings support strategic management methods that incorporate BD
sets into logistics operations to make logistics efficient and innovative.
Further research may focus on the changing role of digitalization in logistics
and its implications for international supply chain management. It is also
appropriate to conduct sector-specific analysis for a deeper understanding by
policymakers and practitioners to improve logistics processes through
data-driven strategies. The article proposes a new concept of ‘intelligent
adaptive logistics (IAL)’ based on the integration of Big Data, artificial
intelligence, IoT and cognitive analytics to create self-learning logistics
systems. The purpose of the development of ‘intelligent adaptive logistics
(IAL)’ is to assess how the level of digital maturity of a company in the field
of IAL affects its logistics efficiency in the international context. For the
first time, the IAL_Adopt index is proposed, which comprehensively takes into
account not only the implementation of Big Data, but also the use of AI, IoT,
cognitive analytics and the level of adaptation to changes in the external
environment. The results of the panel regression analysis show that a high level
of IAL_Adopt significantly improves logistics efficiency, especially in
countries with high-quality infrastructure and a favourable regulatory
environment. Despite technological investment, companies in weak regulatory and
infrastructure contexts cannot fully benefit from BD. The IAL concept and the
IAL_Adopt index provide a scalable framework for assessing digital maturity and
planning strategic improvements in logistics. These findings confirm the
positive impact of integrated digital technologies on logistics performance
globally. The developed concept can serve as a basis for digital logistics
transformation strategies. |
Keywords: |
Big Data Sets, Logistics Efficiency, International Management, Econometric
Analysis, Technological Infrastructure, Innovation in the Supply Chain,
Intelligent Adaptive Logistics, AI, Cognitive Analytics, Digital Maturity. |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
A NOVEL PVDF-COPPER NANOPARTICLE-BASED NANOGENERATOR FOR EFFICIENT ENERGY
HARVESTING |
Author: |
V. VIJAYALAKSHMI, Dr. K. S. GEETHA, Dr. SHANMUKHA NAGARAJ, R. THIAGARAJAN |
Abstract: |
The rapid development of energy harvesting technologies has highlighted the
need for efficient and sustainable means of meeting the rising demand
for self-powered electronic devices. This research investigates the design of a
polyvinylidene fluoride (PVDF)-copper nanoparticle-based nanogenerator for
energy harvesting, aiming to improve flexibility, biocompatibility, and
cost-effectiveness. The nanogenerator is designed and modeled with the
help of COMSOL Multiphysics to analyze its physical, electrical, mechanical, and
chemical behavior under changing environmental conditions. During the simulation
stage, charge buildup, stress distributions, and mechanical deformation are
analyzed to optimize the choice of materials and structural design. Once
optimized, fabrication is done by employing advanced material deposition and
electrode patterning methods to fabricate an experimental prototype with maximum
energy conversion efficiency. The artificially designed nanogenerator is
thoroughly characterized to evaluate its physical, electrical, chemical, and
mechanical properties to determine its reliability and stability in various
operational conditions. The characterization involves voltage and current output
measurement, mechanical flexibility evaluation, and chemical stability testing
by FTIR. Performance evaluation finally incorporates signal conditioning
circuits to assess the practical suitability of the nanogenerator. The
experimental results demonstrate efficient energy harvesting, with a peak
output voltage of 5.2 V and a power density of 120 µW/cm² under an
applied force of 10 N. Signal-conditioning circuit integration stabilizes the
power delivery and renders the nanogenerator very useful in wearables,
biomedical implants, and industrial sensors. The findings of this study
highlight the potential of PVDF-Copper nanoparticle-based nanogenerators to
advance self-powered systems further, paving the way for further research in
sustainable energy technologies. |
Keywords: |
Nanogenerator, Energy Harvesting, PVDF, Copper Nanoparticles, COMSOL
Multiphysics, Signal Conditioning Circuits, Mechanical Deformation, Self-Powered
Systems, Sustainable Energy Solutions |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
PERSONALISED LEARNING USING DEEP LEARNING TECHNIQUES |
Author: |
M HARSHITHA SYAMALA, V LIKHITHA, DIVYA LINGINENI, PRASANNA DRL |
Abstract: |
With the growing popularity of online learning and e-education platforms, the
need for learning that adapts to individual requirements has been
increasing. Although existing solutions have come a long way, they still lack
the ability to capture a wide range of learning styles and truly deliver
personalized learning experiences. This project proposes a comprehensive
framework combining sentiment analysis and knowledge graph technologies to
enhance personalization in educational platforms. Using advanced
models such as BERT-Bi-LSTM-Attention to analyze course reviews, and knowledge
graphs to represent user interactions and course relationships and to store
user preferences, the system can mitigate cold-start issues and make course
recommendations more transparent. The methodology aims to improve learner
motivation and learning outcomes by delivering a more responsive and
personalized digital education environment. The solution also includes a
feedback loop and dynamically generates quizzes based on completed courses,
helping students assess their progress and identify areas for improvement. It
offers a scalable model that can adapt to various educational environments and
enables continuous enhancement of personalized learning. |
Keywords: |
Personalization, Course Recommendations, BERT-Bi-LSTM-Attention, Knowledge
Graph, Feedback Mechanism. |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
EARLY FIRE HAZARD PREDICTION FRAMEWORK IN SMART CITIES USING DEEP LEARNING WITH
ANTLION OPTIMIZATION ALGORITHM |
Author: |
DR.G. BHUVANESWARI, DR.G. MANIKANDAN, M. SANDHYA, DHANESH KUMAR, DR. ZIAUL HAQUE
CHOUDHURY, PRABHAKARA RAO T |
Abstract: |
This study aims to refine early fire risk prediction models so that
fire locations can be evaluated accurately using sensor data. The Internet of
Things (IoT) is integral to smart cities, with applications that include
crime predictions, traffic optimization and monitoring of health and
environmental conditions. This article reports a study on using Recurrent Neural
Network (RNN) with an Ant Lion Optimization (ALO) framework to enhance and
refine the prediction of fire hazards. IoT sensors in smart cities monitor
environmental conditions such as drought, temperature, smoke, flame, relative
humidity, fuel moisture and duff moisture. This sensed data is stored in the
Firebase cloud storage and analyzed in MATLAB. The ensuing enhancement
of the proposed model is validated by comparison with conventional prediction
models. Our results indicate gains in accuracy and reduction in error rates in
fire hazard predictions. |
Keywords: |
Environment, fire hazards, Internet of Things, Smart city. |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
CYBERSECURITY ASSESSMENT FRAMEWORK FOR HEALTHCARE INSTITUTIONS PRE-MEDICAL
CYBER-PHYSICAL SYSTEM ADOPTION |
Author: |
MOHAMED AWADA ELMAGBARI, MAHEYZAH MD SIRAJ, SITI HAJAR OTHMAN |
Abstract: |
Medical cyber-physical systems (MCPS) enable the remote monitoring of patients,
thereby enhancing accessibility to care. Their secure adoption, however,
remains difficult, as unresolved cybersecurity and privacy issues persist.
While a number of frameworks exist in the general realm of cyber-physical
system (CPS) cybersecurity readiness, few address the healthcare domain. This
study proposes a Cybersecurity Assessment Framework (CAF) together with a
quantitative scorecard to assess cybersecurity readiness before healthcare
institutions embark on MCPS adoption. Meta-analysis of existing frameworks,
coupled with expert interviews, established five Critical Success Factors
(CSFs): reliability, validity, third-party authentication,
security, and transparency. Furthermore, a case study approach was adopted with
IT managers and healthcare professionals at two Libyan hospitals. Results
indicated that the CAF is valid and usable and supports the secure adoption of
MCPS, though privacy remains a concern. This work presents a novel,
domain-specific CAF for healthcare cybersecurity, together with tools that
support IT governance. |
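The quantitative scorecard itself is not given in the abstract; a hypothetical weighted-scorecard sketch over the five stated CSFs (the weights, rating scale, and example ratings are invented for illustration) could look like:

```python
# Hypothetical 1-5 maturity ratings gathered during an assessment; the five
# CSFs come from the abstract, the weights are illustrative only.
CSF_WEIGHTS = {
    "reliability": 0.25,
    "validity": 0.20,
    "third-party authentication": 0.20,
    "security": 0.25,
    "transparency": 0.10,
}

def readiness_score(ratings: dict) -> float:
    """Weighted cybersecurity-readiness score on a 0-100 scale."""
    total = sum(CSF_WEIGHTS[c] * ratings[c] for c in CSF_WEIGHTS)
    return round(total / 5 * 100, 1)  # ratings are out of 5

ratings = {"reliability": 4, "validity": 3, "third-party authentication": 2,
           "security": 4, "transparency": 5}
print(readiness_score(ratings))  # → 70.0
```

An institution would compare such a score against a readiness threshold before committing to MCPS adoption; the actual weights would come from the framework's expert elicitation, not from this sketch.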
Keywords: |
Critical Success Factors, Cyber-Physical Systems, Cybersecurity Assessment
Framework, Healthcare Industry, Medical Cyber-Physical Systems. |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
PREDICTION OF NON-ALCOHOLIC FATTY LIVER DISEASE (NAFLD) USING DNA PATHOLOGICAL
DATA AND SUPPORT VECTOR MACHINES |
Author: |
T. V. K. P. PRASAD, SRI LEKHA BANDLA, N. SRIKANTH, K KAVYA RAMYA SREE, INAKOLLU
ASWANI, PAMULA UDAYARAJU, BODDU L V SIVA RAMA KRISHNA |
Abstract: |
Non-Alcoholic Fatty Liver Disease (NAFLD) has emerged as one of the most
prevalent liver disorders globally, affecting nearly one-third of the
population, with particularly high incidence rates in countries like the UK.
Despite its widespread occurrence, accurate estimation of its prevalence remains
a challenge. Early-stage NAFLD, typically characterized by simple steatosis, can
silently progress to more severe conditions such as non-alcoholic
steatohepatitis (NASH), fibrosis, and cirrhosis if left untreated. This
progression significantly compromises liver function and increases the risk of
cardiovascular complications. However, current diagnostic methods, including
magnetic resonance spectroscopy and ultrasound imaging, are often limited by
cost, accessibility, and diagnostic specificity. Given the clinical urgency and
the limitations of conventional diagnostics, this study addresses the critical
need for an accessible and accurate method to detect early-stage liver
disease—specifically, to predict NASH within the NAFLD spectrum. We propose a
machine learning-based approach that leverages clinical and pathological data,
including blood parameters and ultrasound-derived tissue characteristics, to
support early detection. Using a dataset of 181 patients, we applied
preprocessing techniques such as normalization and categorical encoding to
prepare the data for modelling. Features such as integrated backscatter (IB),
Q-factor, and homogeneity factor (HF) were extracted to quantify liver tissue
characteristics. Support Vector Machine (SVM), chosen for its balance of
simplicity and efficiency in handling high-dimensional datasets, was employed
for classification and regression tasks. Experimental validation using
Python-based implementations demonstrated the model's effectiveness, achieving
an average accuracy of 89.95% across both clinical and imaging-derived datasets.
This study underscores the potential of machine learning in improving early
diagnosis of liver diseases and reducing their long-term clinical burden. |
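As a small illustration of the preprocessing steps the study names (normalization and categorical encoding), with entirely hypothetical feature values standing in for the clinical and imaging-derived data:

```python
import numpy as np

# Hypothetical patient records: two numeric features (stand-ins for
# measures such as IB and Q-factor) and one categorical field.
numeric = np.array([[12.0, 0.8],
                    [18.0, 0.5],
                    [15.0, 0.9]])
category = ["male", "female", "male"]

# Min-max normalisation maps each numeric column onto [0, 1].
lo, hi = numeric.min(axis=0), numeric.max(axis=0)
normalised = (numeric - lo) / (hi - lo)

# One-hot encoding for the categorical field.
levels = sorted(set(category))
onehot = np.array([[1.0 if c == lv else 0.0 for lv in levels]
                   for c in category])

# Combined feature matrix ready for a classifier such as an SVM.
features = np.hstack([normalised, onehot])
print(features.shape)  # → (3, 4)
```

With features on a common scale, a margin-based classifier such as the SVM used in the study is not dominated by whichever raw measurement happens to have the largest range.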
Keywords: |
Fatty Liver Diseases, Non-Alcoholic Fatty Liver Diseases, NASH, SVM,
Pathological Information, Machine Learning. |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
A NOVEL HYBRID APPROACH FOR BRAIN TUMOR CLASSIFICATION USING LIQUICON-NET AND
ENHANCED PREPROCESSING |
Author: |
PAVAN KUMAR PAGADALA, TAN KUAN TAK, R. THIAGARAJAN, PRAVIN RAMDAS KSHIRSAGAR |
Abstract: |
This paper introduces a thorough methodology for categorizing brain tumors using
the BRATS dataset, utilizing sophisticated image processing and machine learning
techniques. The technique commences by obtaining and adjusting brain scan images
to a consistent size of 256x256 pixels by spline interpolation. The images are
subsequently converted to grayscale using the luminosity method, which
simplifies the data while preserving crucial structural details. The Total
Variation (TV) denoising technique is applied to reduce noise while maintaining
essential characteristics, producing high-quality pre-processed
images. The pre-processed images are then segmented using the
DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm,
which efficiently classifies data points into core points, border points, and
noise. This segmentation process reveals discrete regions of interest within the
brain scans. Afterwards, Haralick descriptors are utilized to extract features
from the gray-level co-occurrence matrix (GLCM). These features include
Contrast, Correlation, Energy, and Homogeneity. These characteristics offer a
comprehensive representation of the texture and spatial connections within the
images. Dimensionality reduction is accomplished by employing t-Distributed
Stochastic Neighbor Embedding (t-SNE) to streamline the feature set while
maintaining optimal performance. The classification task utilizes the
LiquiCon-Net model, which combines Liquid Neural Networks (LNNs) and
Convolutional Neural Networks (CNNs). The model integrates ResNet-50's
high-level spatial features with the temporal information derived from the
Liquid Time-Constant Network (LTCN). The fusion layer combines these features,
and the ultimate classification is determined by a sequence of densely connected
layers, which are then followed by a softmax function. The performance
evaluation of the suggested model is carried out utilizing a confusion matrix,
ROC plot, and measures such as accuracy, sensitivity, and specificity. The
proposed model attains an accuracy of 99.42%, a specificity of 99.62%, and a
sensitivity of 99.81%. The results exhibit exceptional performance in
comparison to current models, such as EfficientNets, Capsule Networks (CapsNet)
+ VGG19, and AlexNet + GoogleNet Ensemble. The thorough examination and
impressive measurements highlight the efficiency of the suggested approach in
precisely categorizing brain tumors, presenting substantial prospects for
enhancing diagnostic precision in clinical environments. |
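The Haralick descriptors named above derive from the gray-level co-occurrence matrix (GLCM); a minimal numpy sketch (our own simplified version with a single pixel offset and a toy image, not the paper's code) computes a GLCM and three of the listed features:

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Normalised gray-level co-occurrence matrix for one pixel offset."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m / m.sum()

def haralick(p):
    """Contrast, energy and homogeneity from a normalised GLCM."""
    idx = np.arange(p.shape[0])
    d = idx[:, None] - idx[None, :]          # gray-level differences
    contrast = (p * d**2).sum()
    energy = (p**2).sum()
    homogeneity = (p / (1.0 + np.abs(d))).sum()
    return contrast, energy, homogeneity

# Toy 4-level image with four uniform quadrants.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
con, en, hom = haralick(glcm(img))
```

On real scans these statistics are computed over several offsets and angles and averaged, but the per-matrix formulas are exactly the ones above.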
Keywords: |
Brain Tumors, Glioma, LNN, LTCN, Medical, Deep learning |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
THE USE OF INNOVATIVE TECHNOLOGIES FOR THE PREVENTION AND DETECTION OF CRIMES
RELATED TO DOCUMENT FORGERY |
Author: |
HANNA VALIHURA, IHOR NOVIKOV, MARIIA DIAKUR, OLENA YUSHCHYK, VLADYSLAV KUTSENKO |
Abstract: |
The relevance of the study is underscored by the proliferation of digital
forgery and the need for comprehensive solutions to ensure the authenticity of
documents and their legal validity in notarial proceedings. The aim of the study
is to substantiate and validate the architecture of preventive forgery detection
with increased invariance, compatibility and cryptosecurity. The research
employs the following methods: technological typification, comparative analysis,
technical and architectural decomposition, functional enhancement, and
simulation prediction. An analysis encompassing 33 services and 7 technology
classes revealed that complex SSI solutions exhibit the highest level of
completeness. The optimized Microsoft Entra Verified ID demonstrates a reduction
of 44.7% in latency, 61.6% in anchoring time, and 57% in cloud usage, alongside
a 2.5% enhancement in detection accuracy, substantiating the
effectiveness of protocol-level forgery resistance. The scientific novelty lies
in the development of an SSI architecture that integrates BBS+, ZKP and Edge
AI, thereby harmonizing crypto-resilience, offline validation, and an elevated
degree of interoperability. Prospects for future research include the
development of a prototype SSI architecture designed for the field testing of
efficiency, scalability and resilience under real operational load and variable
network conditions, as well as evaluating its suitability for electronic
notarial proceedings. |
Keywords: |
Cryptographic Signatures; Blockchain Anchoring; Decentralized Identifiers (DID);
Zero-Knowledge Proofs; Decentralized Storage |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
SECURING 6G WIRELESS TRANSMISSION THROUGH QUANTUM KEY DISTRIBUTION INTEGRATED
WITH VISIBLE LIGHT COMMUNICATION |
Author: |
HARIPRASAD B., K.P. SRIDHAR |
Abstract: |
The advent of 6G wireless networks requires strong physical layer security (PLS)
measures to guard against evolving cyber threats and guarantee highly
dependable communication. In congested locations, traditional radio frequency
(RF)-based security systems face challenges such as increased interference,
eavesdropping risks, and spectrum congestion. To tackle these
issues, the present research investigates how Visible Light Communication (VLC)
might be incorporated into 6G networks to supplement existing methods of
improving PLS. Minimizing interference from outside sources during VLC signal
transmission, guaranteeing safe key exchange, and finding the optimal location
for system performance and security are all significant obstacles. This research
presents a new framework, Quantum-Enhanced Secure VLC (QES-VLC), which uses
adaptive beamforming in conjunction with quantum key distribution (QKD) to
improve data security and reduce the probability of interception. This method
guarantees a smooth handoff between hybrid VLC-RF networks and restricts illegal
access by taking advantage of VLC's inherent directional transmission features.
Secure vehicle communications, industrial automation, and smart city
infrastructure of the future are all areas where this research could be
beneficial. This approach enables ultra-secure, high-speed wireless networks by
incorporating VLC into the 6G ecosystem; this improves security without reducing
efficiency. Resilience against eavesdropping improves to 99.5%, the bit error
rate decreases to 15.7%, and the secrecy capacity increases to 98.2%. An
extensive simulation analysis is performed using MATLAB to
test the suggested approach. The results show that secure key exchange
efficiency improved to 97.3%, while system latency and power consumption are
lowered to 28.3% and 18.9%, respectively. In addition, its effectiveness in
secure data transfer has been experimentally validated in a real-time VLC
experimentation. |
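The QKD details of QES-VLC are not given in the abstract; for background, the basis-sifting step of the classic BB84 protocol, which underlies most QKD schemes, can be sketched in a few lines (a purely classical, illustrative simulation, not the paper's system):

```python
import random

random.seed(7)
n = 32

# Sender: random bits encoded in randomly chosen bases
# (0 = rectilinear, 1 = diagonal).
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.randint(0, 1) for _ in range(n)]

# Receiver measures each symbol in its own random basis; when the bases
# differ, the measured bit is random.
bob_bases = [random.randint(0, 1) for _ in range(n)]
bob_bits = [b if ab == bb else random.randint(0, 1)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: keep only positions where both parties used the same basis.
sifted = [(a, b) for a, b, ab, bb
          in zip(alice_bits, bob_bits, alice_bases, bob_bases)
          if ab == bb]
key_alice = [a for a, _ in sifted]
key_bob   = [b for _, b in sifted]
assert key_alice == key_bob  # matching bases yield identical key bits
```

Roughly half the positions survive sifting; an eavesdropper who measures in the wrong basis disturbs the retained bits, which is what makes interception detectable.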
Keywords: |
6G, Physical Layer Security, Visible Light Communication, Transmission,
Quantum Key Distribution |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
ENHANCING FAULT DIAGNOSIS AND IMPROVING PRODUCTIVITY IN INDUSTRIAL MANUFACTURING
USING DEEP LEARNING TECHNIQUES |
Author: |
RAJESH KUMAR VERMA, KALLI SRINIVASA NAGESWARA PRASAD, K.S. RANJITH,
V.S.N. MURTHY, I. NAGA PADMAJA, P. THIRUMOORTHY, BH. KRISHNA MOHAN |
Abstract: |
Fault diagnosis in industrial manufacturing is a critical issue that affects the
productivity and efficiency of manufacturing processes. Outdated methods for
fault diagnosis often rely on manual inspections, which are time-consuming and
prone to errors. This framework proposes a deep learning-based fault diagnosis
model to improve productivity in industrial manufacturing. The JAYA optimization
algorithm and Fast Grid Search (FGS) are employed to optimize the
hyperparameters of the model. The proposed model is implemented in MATLAB
software and evaluated using a dataset of industrial manufacturing process data.
The results show that the proposed model achieves high accuracy and precision in
fault diagnosis, outperforming traditional methods. The model can identify
faults early, reducing downtime and improving overall productivity. The findings
indicate that predictive maintenance and optimized feature importance
significantly enhance performance metrics and reduce downtime, with notable
improvements in accuracy up to 0.99 and substantial cost savings, contributing
to a return on investment of around 85%. This framework contributes to the
development of a more efficient and reliable fault diagnosis system for
industrial manufacturing. Future work includes integrating the proposed model
with other machine learning algorithms and incorporating sensor data from
multiple sources to further improve its performance. |
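The JAYA algorithm used here for hyperparameter tuning is parameter-free and simple to state: each candidate moves toward the current best solution and away from the worst. A minimal sketch minimising a toy sphere function (our own illustration of the algorithm, not the paper's setup) is:

```python
import numpy as np

def jaya(f, bounds, pop=20, iters=200, seed=0):
    """Minimise f over box bounds with the parameter-free JAYA update:
    X' = X + r1*(best - |X|) - r2*(worst - |X|)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(iters):
        scores = np.array([f(x) for x in X])
        best, worst = X[scores.argmin()], X[scores.argmax()]
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        Xn = np.clip(X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X)),
                     lo, hi)
        # Greedy selection: keep a move only if it does not get worse.
        keep = np.array([f(a) <= f(b) for a, b in zip(Xn, X)])
        X = np.where(keep[:, None], Xn, X)
    scores = np.array([f(x) for x in X])
    return X[scores.argmin()], float(scores.min())

# Sanity check on the sphere function, whose minimum is at the origin.
lo, hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
x_best, f_best = jaya(lambda x: float((x**2).sum()), (lo, hi))
```

In the paper's setting, f would score a candidate hyperparameter vector by the diagnosis model's validation error rather than by a test function.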
Keywords: |
Deep Learning, Fault Diagnosis, Industrial Manufacturing, JAYA Optimization
Algorithm, Fast Grid Search, and CNN Architecture. |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
SECURE SOFTWARE DEVELOPMENT THROUGH AI-DRIVEN CODE VULNERABILITY ANALYSIS |
Author: |
RAMAKRISHNA KOLIKIPOGU, NEHA BELWAL, DR.SABITHA KUMARI FRANCIS, DR. S. N. V.
JYOTSNA DEVI KOSURU, DR SUBBA RAO POLAMURI, RAMESH BABU PITTALA |
Abstract: |
Software development has been drastically transformed by the emergence of
automatic vulnerability detection systems enabled by the rapid expansion of
artificial intelligence (AI). Application programs frequently face security
threats because traditional techniques for detecting vulnerable code are
error-prone and time-consuming. Using ML and DL methods, the present study
proposes an AI-driven approach to code vulnerability detection. Using Natural
Language Processing (NLP) and static code analysis, the recommended approach
identifies possible risks, including buffer overflow, SQL injection, and
cross-site scripting (XSS). The framework's built-in AI-based vulnerability
detection and real-time feedback help optimise code security. The performance
assessment shows that the proposed AI-based model surpasses traditional
methods in both error identification accuracy and efficiency. This research
illustrates the significant role of AI in enhancing software security and
offers useful insights into forthcoming AI-driven developments in
cybersecurity. |
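Systems like the one described pair learned models with static-analysis rules; as a toy, rule-only illustration of the static-analysis side (the patterns, rule names, and code snippet are entirely hypothetical), a pattern scan over source lines looks like:

```python
import re

# Two illustrative rules: string-concatenated SQL (injection-prone) and a
# hard-coded credential. Real analysers combine many such rules with
# learned models over code tokens.
RULES = {
    "sql-injection": re.compile(r"execute\(\s*[\"'].*[\"']\s*\+"),
    "hardcoded-secret": re.compile(r"(?i)(password|api_key)\s*=\s*[\"'][^\"']+[\"']"),
}

def scan(source: str):
    """Return (line number, rule name) pairs for every matched rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for rule, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, rule))
    return findings

snippet = '''
password = "hunter2"
cursor.execute("SELECT * FROM users WHERE name = '" + name)
'''
print(scan(snippet))
```

An ML layer would then rank or filter such findings using features of the surrounding code, which is where the accuracy gains reported in the abstract would come from.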
Keywords: |
Software Development, Machine Learning, Deep Learning, Cybersecurity, Natural
Language Processing, Vulnerable Code Detection. |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
BLOCKCHAIN - ENABLED DIGITAL IDENTITY MANAGEMENT FOR ENHANCED SAFETY
SYSTEMS |
Author: |
Dr D Naga Tej, Dr R V V Murali Krishna, Dr Chinnarao kurangi, Sagar Sathuluri,
Dr Subba Rao Polamuri, Dr Hari Jyothula |
Abstract: |
Digital identity management that incorporates blockchain technology provides a
safe and effective way to enhance safety systems. Typical issues with older
methods of managing identities include data breaches, identity theft, and
unauthorized access. The encrypted, decentralized, and immutable nature of
blockchain strengthens identity verification by eliminating single points of
failure and the concentration of control. By further automating the
authentication process, smart contracts boost transparency while lowering
fraud risks. This paper focuses on how safety-critical settings may leverage
blockchain-based digital identity management systems to provide data
accessibility, integrity, and privacy. In addition to helping organizations
meet and exceed security regulations, blockchain, a distributed ledger
technology, gives individuals more control over their data. The suggested
method addresses problems such as interoperability, scalability and
regulatory constraints, paving the way for a
more secure and reliable structure. The primary emphasis of this research is on
the revolutionary potential of blockchain technology as it pertains to identity
management. Therefore, current safety systems will have enhanced dependability,
security and efficiency. |
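The immutability argument rests on hash chaining: each record commits to the hash of its predecessor. A minimal stdlib sketch (not the paper's system; the DID-style identifiers are hypothetical) shows how tampering with an earlier identity record breaks verification of the whole chain:

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_block(prev_hash: str, identity_claim: dict) -> dict:
    """Append-only record: each block commits to the previous block's hash,
    so altering any earlier identity record invalidates all later ones."""
    body = {"prev": prev_hash, "claim": identity_claim}
    body["hash"] = sha256(json.dumps(body, sort_keys=True).encode())
    return body

def verify(chain) -> bool:
    for i, blk in enumerate(chain):
        expect = {"prev": blk["prev"], "claim": blk["claim"]}
        if sha256(json.dumps(expect, sort_keys=True).encode()) != blk["hash"]:
            return False
        if i and blk["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("0" * 64, {"id": "did:example:alice", "role": "operator"})
chain = [genesis,
         make_block(genesis["hash"], {"id": "did:example:bob", "role": "auditor"})]
assert verify(chain)
chain[0]["claim"]["role"] = "admin"  # tamper with an earlier record
assert not verify(chain)
```

A production system replicates such a chain across many nodes and adds digital signatures and consensus, which is what removes the single point of failure the abstract describes.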
Keywords: |
Blockchain, Data Breaches, Digital Identity Management, Distributed Ledger,
Safety Settings. |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
ELEVATING VOICE DIAGNOSTICS: SAVA UNLEASHES NEW FRONTIERS IN HEALTHY AND
PATHOLOGICAL VOICE DETECTION |
Author: |
ABDUL REHMAN ALTAF, HAIRULNIZAM MAHDIN, AWAIS MAHMOOD, ABDULLAH ALTAF, MUHAMMAD
HUSSAIN, SAJID ISLAM |
Abstract: |
The detection of pathological voices is a pressing and crucial concern that
necessitates a thorough exploration of voice signal properties within healthcare
settings. While a plethora of voice features exists, typically leveraged through
machine learning techniques to distinguish between healthy and pathological
voice signals, yet new voice features are required for more promising results.
This study introduces a novel voice feature known as the Sum of the Absolute
Values of Amplitudes (SAVA) of voice signals. The development of this feature is
meticulously detailed in an algorithmic fashion. To ensure its robustness and
reliability, a rigorous evaluation technique, known as K-fold cross-validation,
has been employed. This approach not only validates the effectiveness of our
feature but also provides insights into its stability and generalization
capabilities. Utilizing this feature, a novel framework called the SAVA-Based
Classifier (SAVABC) has been devised. In addition, the Gaussian Naïve Bayes
(GaussianNB) machine learning classifier was chosen for implementation.
Extensive voice datasets, comprising both healthy and pathological samples from
the Saarbrucken Voice Database (SVD), were utilized for experimentation. The
results of the simulations are highly promising, achieving an accuracy of 90.21%
for vowel classification and 90.65% for sentence classification. Based on these
statistics, we assert that the proposed SAVABC system holds the potential for
deployment in real-world healthcare settings to detect pathological voices in
patients. This system has the capacity to replace manual assessment methods in
healthcare, offering substantial convenience to potential patients. |
Keywords: |
Healthy Voice, Pathological Voice, Machine Learning, Voice Feature, Signal
Processing |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
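The SAVA feature described in the abstract above reduces each recording to a single scalar. A minimal sketch, assuming a length-normalized sum so that recordings of different durations stay comparable (the paper's exact formulation may differ):

```python
# Sketch of the SAVA feature: the Sum of the Absolute Values of Amplitudes
# of a voice signal. Length normalization is an assumption added here; the
# paper's exact formulation may differ.

def sava(signal):
    """Sum of absolute amplitude values, length-normalized (assumption)."""
    if not signal:
        raise ValueError("empty signal")
    return sum(abs(s) for s in signal) / len(signal)

# Toy usage: a low-amplitude sample vs. a high-amplitude one.
quiet = [0.01, -0.02, 0.015, -0.01]
loud = [0.4, -0.5, 0.45, -0.38]
assert sava(quiet) < sava(loud)
```

In the paper's pipeline this scalar would feed the GaussianNB classifier, with K-fold cross-validation over the SVD recordings to check stability.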
|
Title: |
ATTENTION-ENHANCED LSTM MODEL FOR INTRUSION DETECTION IN IMBALANCED NETWORK
TRAFFIC DATA |
Author: |
YAKUB REDDY.K , G.SHANKARLINGAM |
Abstract: |
Intrusion Detection Systems (IDS) face significant challenges in identifying
minority attack classes within imbalanced network traffic, leading to
compromised security in critical systems. To address this, we propose an
Attention-Enhanced Long Short-Term Memory (AE-LSTM) model that integrates
multi-head attention mechanisms with Long Short-Term Memory (LSTM) networks for
robust intrusion detection. The model is trained on the NSL-KDD dataset using a
comprehensive preprocessing pipeline that includes one-hot encoding,
normalization, and SMOTE-based oversampling to mitigate class imbalance,
particularly for rare attack types such as User-to-Root (U2R) and
Remote-to-Local (R2L). Our architecture incorporates an LSTM layer with
multi-head attention, residual connections, and dense layers with dropout
regularization. Experimental results demonstrate a classification accuracy of
98.43% and a Top-5 accuracy of 100%. ROC-AUC scores reached 1.00 for most
classes, and Precision-Recall analysis confirmed high sensitivity for minority
attacks. Visualization via t-SNE revealed distinct inter-class separation. The
proposed AE-LSTM model significantly enhances detection performance on
imbalanced datasets, presenting a promising approach for next-generation
intrusion detection systems (IDS). |
Keywords: |
Intrusion Detection, LSTM, Attention, SMOTE, NSL-KDD |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
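The SMOTE-based oversampling step mentioned above can be illustrated with a minimal sketch. Real SMOTE interpolates toward one of a point's k nearest minority neighbors; this toy version, a simplifying assumption for brevity, interpolates between two randomly chosen minority points:

```python
import random

# Minimal SMOTE-style oversampling sketch (an illustration of the idea, not
# the imbalanced-learn library): synthesize new minority-class samples by
# interpolating between pairs of existing minority points.

def smote_oversample(minority, n_new, rng=None):
    """Return n_new synthetic samples interpolated between minority points."""
    rng = rng or random.Random(0)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([ai + lam * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

minority = [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
new_points = smote_oversample(minority, 5)
assert len(new_points) == 5
```

Because every synthetic point is a convex combination of two real minority points, the oversampled rare classes (such as U2R and R2L) stay inside the minority region of feature space.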
|
Title: |
CROSS-MODAL ADAPTIVE META-FREE LEARNING FOR SCALABLE CONTINUAL ZERO-SHOT
GENERALIZATION |
Author: |
RAFIAH S B , B PRAJNA |
Abstract: |
Continual Zero-Shot Learning (CZSL) involves training models to learn
sequentially from separate data streams while effectively generalizing to unseen
classes without revisiting earlier data. However, conventional approaches often
face issues such as catastrophic forgetting and weak generalization, especially
under noisy, multimodal, or low-resource conditions. To overcome these
limitations, the proposed work introduces Cross-Modal Adaptive Meta-Free
Learning (CAMeL), a scalable, task-free learning framework. CAMeL incorporates a
Cross-Modal Generative Memory to synthesize both visual and semantic features,
ensuring knowledge retention across tasks. It also features a Neural Attribute
Synthesizer that generates context-aware prompts, enhancing adaptability to
challenging learning conditions. The framework is further optimized through
Continual Learning Adaptive Sharpness-Aware Minimization (CLASAM), which
flattens the loss landscape to promote stability and generalization. CAMeL
effectively supports multimodal learning, reduces forgetting, and handles both
zero-shot and few-shot tasks. Experimental results across six benchmarks
including CUB, AWA1, and SUN show that CAMeL+CLASAM achieves up to 7.5% higher
harmonic mean than existing methods, proving its robustness and scalability. |
Keywords: |
Continual Learning, Zero-Shot Learning, Sharpness-Aware Optimization, Prompt
Learning, Cross-Modal Learning |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
UTILIZING AZURE AUTOMATED MACHINE LEARNING FOR SALES PREDICTION |
Author: |
HADI SYAHRIAL , FIKRI SALAM , TANTY OKTAVIA |
Abstract: |
This research assesses the use of Azure Automated Machine Learning (Azure
AutoML) at Company XYZ, a prominent flour manufacturer in Indonesia, to overcome
the constraints of traditional forecasting techniques. Utilizing the
Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology and Azure
AutoML's no-code architecture, predictive models were constructed employing five
years of historical data. The Voting Ensemble model proved to be optimal, with
the Normalized Mean Absolute Error (NMAE) surpassing the Normalized Root Mean
Squared Error (NRMSE), attaining an NMAE of 0.21278 in training and improving to
0.12118 in testing. The Relative Deviation Averages (RDA) for Products K and Y
during one semester were decreased to 10.18% and 0.76%, respectively, surpassing
traditional approaches characterized by significant variability. To ascertain
dependability, predictions were juxtaposed with actual sales data over a
six-month period using semester-based RDA calculations, yielding findings that
demonstrated considerable improvement over traditional techniques.
Notwithstanding its benefits, Azure AutoML encountered constraints in automated
preprocessing activities, necessitating user intervention using Microsoft SQL
Server for data cleansing and preparation. The no-code interface allowed
non-expert users to deploy models inside Company XYZ's Microsoft environment;
nonetheless, successful implementation required a fundamental understanding of
statistics and preprocessing methods, including outlier identification, as well
as proficiency in MS SQL Server (Transact-SQL). This paper presents a scalable
system for improving prediction accuracy via the integration of CRISP-DM
methodology, SQL preprocessing, and Azure AutoML, hence facilitating AI
deployment in resource-limited settings. |
Keywords: |
Automated Machine Learning, Azure AutoML, CRISP-DM, Sales Prediction, Time
Series |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
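The two normalized error metrics compared above can be sketched as follows; normalization by the range of the true values is assumed here, matching Azure AutoML's convention for normalized regression metrics:

```python
import math

def nmae(actual, predicted):
    """Normalized MAE: mean absolute error over the range of actual values."""
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
    return mae / (max(actual) - min(actual))

def nrmse(actual, predicted):
    """Normalized RMSE: root mean squared error over the same range."""
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    return math.sqrt(mse) / (max(actual) - min(actual))

# MAE never exceeds RMSE, so NMAE <= NRMSE holds on any data.
y_true = [100.0, 120.0, 140.0, 160.0]
y_pred = [110.0, 118.0, 135.0, 165.0]
assert nmae(y_true, y_pred) <= nrmse(y_true, y_pred)
```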
|
Title: |
INTERNET OF THINGS CONSUMER ELECTRONICS CYBERSECURITY APPROACH BASED ON DEEP
LEARNING |
Author: |
MORARJEE KOLLA, PRAVEENA BAI DESAVATHU, K NARAYANA RAO, NALLA SIVA KUMAR,
HANUMANTHA RAO BATTU, JOHN T MESIA DHAS, SUBRAMANYAM KUNISETTI |
Abstract: |
Protecting linked devices from potential weaknesses and dangers is the primary
goal of security in the Internet of Things (IoT) consumer electronics. The
collection, transmission, and storage of sensitive information by these smart
devices necessitates stringent security measures to prevent hacking, data
breaches, and unauthorized access. To ensure the security and privacy of user
data, it is essential to utilize strong encryption, secure authentication
procedures, and to update software regularly. Users can start and operate drones
anytime, anywhere, and they offer a bird's-eye view. Criminals and
cybercriminals, however, have begun to exploit drones maliciously. These attacks
are extremely dangerous and destructive, and they occur with high frequency.
Effective protection therefore demands both investigative work and preventative
measures. Drones equipped with deep learning (DL) for intrusion detection can
exploit complex neural network (NN) structures, enhancing security monitoring in
ever-changing outdoor settings.
These drones can fly alone, armed with high-tech sensors and computing power, to
scan the sky for dangers like suspicious activity, unapproved people or
vehicles, and then label them accordingly. Drones equipped with DL techniques
can instantly recognize and respond to complicated patterns and irregularities,
paving the way for proactive security measures. An improved method for drone
platforms based on mathematical modeling, known as MGOADL-CS, which stands for
Mountain Gazelle Optimization with Attention to Deep Learning for Cybersecurity,
is presented in this paper. By identifying assaults with the help of optimal DL
models, the MGOADL-CS approach seeks to enhance cybersecurity in the drone's
environment through the use of BC technology. Starting with input data
normalization, the MGOADL-CS method employs a linear scaling normalization (LSN)
strategy. When it comes to dimensionality reduction, the MGOADL-CS method relies
on an improved tunicate swarm algorithm (ITSA) based feature selection strategy.
Next, cyberattacks are detected and classified using the attention long
short-term memory neural network (ALSTM-NN) model. At last, the ALSTM-NN model's
hyperparameter values are fine-tuned using the MGO-based hyperparameter tuning
procedure. In order to showcase the improved attack detection outcomes of the
MGOADL-CS method, a comprehensive simulation set is completed using the NSL
dataset. The accuracy rating of 99.71% demonstrated by the performance
validation of the MGOADL-CS method was higher than that of previous approaches. |
Keywords: |
Internet of Things, Consumer Electronics Cybersecurity, Unmanned Aerial
Vehicles, Mountain Gazelle Optimization, Deep Learning |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
AN INTELLIGENT SYSTEM FOR HAJJ CROWD MANAGEMENT USING DATA MINING TECHNIQUES |
Author: |
BASHAIR FAHAD , ISLAM R. ABDELMAKSOUD , HAZEM EL-BAKRY |
Abstract: |
Every year, millions of Muslims come to Mecca to participate in the Hajj
pilgrimage. Due to the large number of attendees, there are a variety of
logistical and safety concerns that must be resolved. Modern crowd management
techniques are often inefficient, resulting in tragic loss-of-life events such
as the 2015 stampede. In this paper, we explore the use of machine
learning methods to predict crowd concentration during the Hajj and Umrah
pilgrimages. For this purpose, we apply the Hajj and Umrah Crowd Management
dataset available on Kaggle. Our aim is to classify remote sensing crowd density
features into three classes (Low, Medium, and High) based on conditions
such as time of the year, weather, and health data. The dataset requires
preprocessing such as rescaling, imputation of missing values, and encoding of
categorical variables. Feature selection is performed using mutual information
to eliminate irrelevant factors that do not aid in predicting crowd density.
Hold-out and 5-Fold Cross-Validation techniques are used to train and assess
five classification models: Random Forest, K-Nearest Neighbors (KNN), Support
Vector Machines (SVM), Decision Trees, and Logistic Regression. The results show
that Random Forest performs better than the other models. When feature selection
is used, it attains maximum accuracy and F1-scores. The outcomes show how well
machine learning predicts crowd density, and Random Forest turns out to be the
most dependable model for handling sizable crowds during the Hajj and Umrah. |
Keywords: |
Hajj Crowd Management, Feature Selection, Data Mining Techniques, Mutual
Information, Machine Learning in Crowd Management. |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
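The mutual-information feature scoring described above measures how much a feature tells us about the crowd-density label. A self-contained sketch for discrete (categorical or binned) features; in the study this score is used to drop features that contribute nothing to the prediction:

```python
from collections import Counter
from math import log2

# Mutual information I(X; Y) = sum over (x, y) of
#   p(x, y) * log2( p(x, y) / (p(x) * p(y)) ),
# estimated from empirical counts of a discrete feature and the class label.

def mutual_information(xs, ys):
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        pj = c / n
        # pj * n * n / (px[x] * py[y]) equals p(x, y) / (p(x) * p(y))
        mi += pj * log2(pj * n * n / (px[x] * py[y]))
    return mi

# A feature identical to the label carries full information (1 bit here)...
labels = ["Low", "High", "Low", "High"]
assert abs(mutual_information(labels, labels) - 1.0) < 1e-9
# ...while a constant, uninformative feature scores zero and would be dropped.
assert mutual_information(["x"] * 4, labels) == 0.0
```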
|
Title: |
SELF-ADAPTIVE TRANSIENT GENERALIZED FUNCTIONAL REGRESSIVE CRYPTOSYSTEM BASED
AUTHENTICATION FOR SECURED DATA TRANSMISSION IN PERSONAL AREA NETWORK |
Author: |
R.ABARNA SRI , K.DEVASENAPATHY |
Abstract: |
A Personal Area Network (PAN) connects personal electronic devices such as
laptops, wearable devices, and other peripherals. During communication, a PAN is
susceptible to various attacks launched by malicious devices within the network,
so security plays an important role in protecting data in transit. Secure
transmission has been discussed in numerous studies, but achieving high
confidentiality remains a major challenge. The Self-Adaptive Transient
Generalized Functional
Regressive Cryptographic Authentication (SATGFRCA) technique is designed to
enhance data confidentiality in PANs. The SATGFRCA technique employs the Benaloh
Public Key Homomorphic Cryptosystem to achieve improved data confidentiality. In
SATGFRCA technique, nodes or devices register their information with the base
station (BS) to facilitate authentication. The BS then generates a Self-Adaptive
Transient key pair (public and private keys) for each registered node. Upon
receiving the key pair, sender node encrypts data packets via public key as well
as transmits toward receiver node. Upon receiving the data, the receiver node
verifies its authenticity using Generalized Functional Regressive Analysis. Once
authenticity is confirmed, the authorized receiver node decrypts the data using
its private key. This process ensures data packets remain secure, enhancing both
security and confidentiality through the SATGFRCA technique. An experimental
evaluation was conducted based on factors such as authentication accuracy,
authentication time, data confidentiality rate, and data delivery ratio,
considering varying numbers of nodes and data samples. The results show that
SATGFRCA achieves a higher data confidentiality rate and data delivery ratio
with reduced authentication time, confirming its efficiency and effectiveness
compared to existing approaches. |
Keywords: |
Personal Area Network, security, authentication, Benaloh Public Key Homomorphic
Cryptosystem, Self-Adaptive Transient key, Generalized Functional regression |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
MESSAGE CONVERSATION BASED SOCIAL ENGINEERING ATTACK DETECTION USING MACHINE
LEARNING |
Author: |
SEAH NI MIN , NOR FAZLIDA MOHD SANI |
Abstract: |
Social engineering attacks present a major threat in today's interconnected
world, exploiting the intricacies of human communication to deceive individuals
and extract sensitive information. With the increasing reliance on messaging
platforms, communication has become highly informal, often involving colloquial
language, abbreviations, and dynamically evolving linguistic styles. These
characteristics obscure user intent and make it difficult to identify malicious
or deceptive behaviour. Detecting such threats requires a deep understanding of
conversational context, which is often lacking in current approaches, thereby
leaving people vulnerable to subtle social engineering attacks embedded within
everyday messages. This study addresses this gap by fine-tuning DistilBERT, a
state-of-the-art natural language processing (NLP) model, to detect social
engineering attacks in message conversations. Leveraging its ability to
understand contextual semantics while maintaining computational efficiency, the
model was trained and evaluated using the SMS Spam Collection dataset. The
proposed approach achieved a high detection accuracy of 99.46%, outperforming
previous models such as SOCIALBERT. While the results demonstrate strong
classification performance, limitations include the use of a single text-based
dataset and the exclusion of multimodal content such as images and links. Future
work should explore more diverse and multilingual datasets, incorporate
multimodal detection, and optimise the model further for deployment in
resource-constrained environments. |
Keywords: |
Messages, Social Engineering Attack, NLP, Machine Learning, DistilBERT. |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
DEVELOPMENT OF AN EXAM CHEATING DETECTION SYSTEM USING DEEP LEARNING-BASED
FACIAL RECOGNITION TECHNOLOGY |
Author: |
MUHAMMAD SYAHRIANDI ADHANTORO, GANNO TRIBUANA KURNIAJI, RAHAYU FEBRI RIYANTI,
HARUN JOKO PRAYITNO, EKO PURNOMO, DIAN ARTHA KUSUMANINGTYAS, ANAM SUTOPO |
Abstract: |
Cheating in exams, especially in online exams, has become a major challenge for
educational institutions. One common form of cheating is the use of exam proxies
and disguise attempts using photos or videos. To address this issue, this study
develops a fraud detection system based on facial recognition technology using
deep learning. The system is designed to automatically identify exam
participants, monitor their presence during the exam, and prevent impersonation
attempts using images or videos. The research methodology follows the ADDIE
model (Analysis, Design, Development, Implementation, Evaluation), encompassing
needs analysis, system design, deep learning model development, implementation
in real exam scenarios, and system performance evaluation. A facial recognition
model based on ResNet-50 is applied to enhance detection accuracy, while a
liveness detection feature ensures real-time presence verification of exam
participants. The study results indicate that the system achieves an average
accuracy of 96.8% in recognizing participants' faces, performing best under
normal lighting conditions and frontal face angles. Testing across various
scenarios demonstrates the system’s capability to detect fraud, such as the use
of exam proxies, impersonation via photos/videos, and participants leaving the
exam screen. The implementation of this system has contributed to reducing
cheating cases by up to 80% compared to AI-unmonitored exams. The study
concludes that facial recognition-based fraud detection systems can enhance
transparency, security, and academic fairness in online exams. However, several
challenges remain, including technical constraints on participants' devices,
potential biases in facial recognition, and ethical and privacy concerns
regarding biometric data. This research provides a significant contribution to
AI-based academic monitoring and serves as a reference for future developments
in technology-driven exam supervision systems. |
Keywords: |
Deep Learning, Exam Cheating, Face Detection, Face Recognition |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
ENHANCING GROUP DYNAMICS: SOCIAL PRESENCE, COGNITIVE PRESENCE, AND EXPERIENCE IN
VOICETHREAD DISCUSSIONS |
Author: |
SITI NAZLEEN ABDUL RABU, SHARON LEE JIA CHIAN, AHMAD SYAFIQ BIN MOHD NASIR,
NURULLIZAM JAMIAT |
Abstract: |
The study explored undergraduate students’ perceptions of social presence and
cognitive presence during collaborative online discussions using VoiceThread. A
mixed-methods approach was employed, incorporating an online questionnaire with
close- and open-ended questions, analysis of students’ VoiceThread comments, and
interviews. Purposive sampling was used to select a cohort of 61 undergraduate
education majors. Quantitative data from Likert items were subjected to
descriptive and correlation analyses to examine the perceived social presence
and cognitive presence, and their interrelationship. Content analysis of
commenting modes revealed the frequency of text, audio, video, and doodling
comments, supported by open-ended responses to explore students’ preferences.
Additionally, thematic analysis was conducted on interviews with a subset of 15
students to examine their experiences using VoiceThread for discussions.
Findings indicated that students positively perceived both social and cognitive
presences in VoiceThread discussions, with a moderately positive correlation
between the two. Text-based comments were most frequently used, followed by
audio, while video and doodling were used less often. In terms of its
application for group discussion, VoiceThread was valued for its
user-friendliness, commenting flexibility, and ability to foster collaborative
engagement, though the non-threaded comment structure in the free version was
viewed unfavorably. Technical and environmental challenges, such as poor
internet connectivity and noisy settings, also hindered effective use. The study
suggests the need for clear guidance, practice sessions, and supportive
conditions to encourage the use of media-based comments, fostering meaningful
interaction and knowledge construction in online group discussions. |
Keywords: |
Computer-mediated Communication (CMC), VoiceThread, Online Discussion, Community
of Inquiry, Social Presence, Cognitive Presence, Commenting Modes, Experience |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
GHOSTFREAK: ENHANCED STEGANOGRAPHY IN PRINT-SCAN ENVIRONMENTS USING DEEP
LEARNING |
Author: |
S. KOMAL KOUR, SOHAN GUNDOJU, SREE VAIBHAV DUVVURI, DR. T ADILAKSHMI, T JALAJA |
Abstract: |
GhostFreak is a novel deep steganography framework designed to address the
challenges inherent in print-scan pipelines. Traditional steganographic
approaches often struggle with the distortions introduced during the printing
and scanning process, leading to significant degradation in the quality and
recoverability of embedded data. To overcome these limitations, GhostFreak
strategically leverages three distinct color spaces—RGB for digital displays,
HSI for human visual perception, and CMYK for printing—to achieve robust and
imperceptible data embedding while maintaining resilience against real-world
distortions. The framework extends prior research on print-scan steganography by
integrating a U-Net GAN architecture, which processes a concatenated tensor of
multi-color representations alongside a secret code to generate a residual
image. This approach ensures that the hidden information remains intact while
preserving the quality of the Stego image. A key innovation of GhostFreak is its
dynamically weighted loss function, which balances multiple loss
components—secret loss, secret decay loss, LPIPS (perceptual similarity) loss,
residual loss, and edge loss—to optimize the model's performance across
different training phases. During initial training stages, the model prioritizes
accurate encoding of the secret message, whereas in later stages, the emphasis
shifts towards ensuring high-fidelity image reconstruction. We validate the
effectiveness of GhostFreak through comprehensive experiments on a large-scale
image dataset, demonstrating that our method withstands the degradations
introduced by print-scan operations. Comparative evaluations against prior
steganographic approaches highlight significant improvements in terms of
imperceptibility, robustness, and recoverability of the embedded data. The
results indicate that GhostFreak is a promising advancement in deep
steganography, offering a practical and resilient solution for secure data
embedding in printed media. |
Keywords: |
Image Steganography, Deep Learning, Print-Scan, Color Spaces, U-Net GAN, Dynamic
Loss Weighting, Error Correction. |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
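The dynamically weighted loss described above shifts emphasis between loss terms as training progresses. A minimal sketch; the linear schedule and the weight values are illustrative assumptions, not the paper's actual coefficients:

```python
# Phase-dependent loss weighting: early training stresses secret recovery,
# later training stresses image fidelity (LPIPS, residual, edge). The loss
# term names follow the abstract; the schedule itself is an assumption.

def loss_weights(progress):
    """progress in [0, 1]; returns per-term weights that shift with phase."""
    assert 0.0 <= progress <= 1.0
    return {
        "secret": 1.0 - 0.7 * progress,        # decays as training advances
        "secret_decay": 0.5 - 0.3 * progress,
        "lpips": 0.2 + 0.8 * progress,         # perceptual term ramps up
        "residual": 0.1 + 0.4 * progress,
        "edge": 0.1 + 0.4 * progress,
    }

def total_loss(terms, progress):
    """Weighted sum of the individual loss values in `terms`."""
    w = loss_weights(progress)
    return sum(w[name] * value for name, value in terms.items())

early, late = loss_weights(0.0), loss_weights(1.0)
assert early["secret"] > early["lpips"]  # encoding accuracy dominates first
assert late["lpips"] > late["secret"]    # reconstruction fidelity later
```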
|
Title: |
EXPLORING QUANTUM MACHINE LEARNING ALGORITHMS FOR ENHANCED CHEMICAL TOXICITY
CLASSIFICATION |
Author: |
KALIDINDI VENKATESWARA RAO, KUNJAMNAGESWARA RAO, GOKURUBOYINA SITARATNAM |
Abstract: |
Classical computational approaches are steadily improving in their ability to
predict molecular properties from sequence or structure, which is essential in
drug discovery despite its many challenges. A major challenge is to use
computational approaches to identify properties of drug targets, such as
toxicity and solubility (ADMET), without excessive time cost. Machine learning
and deep learning methods are employed for predicting
ADMET properties. The proposed approach combines quantum computing with machine
learning for the binary task of toxicity prediction based on the SMILES data.
This Quantum Machine Learning (QML) approach involves a novel five-step
procedure that converts SMILES to their respective molecular fingerprints, which
are in turn dimensionally reduced, consequently forming four qubits. These
qubits are inputs to the QML models, such as the Quantum SVC (QSVC), Variational
Quantum Circuits (VQC), and Quantum AutoEncoders (QAE). The models are trained
and optimized by using different optimizers and then evaluated based on the
accuracy metric. The QAE model outperformed the remaining model by achieving an
accuracy of 99%. |
Keywords: |
ADMET, SMILES, QML, QAE, QSVC, VQC |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
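The pipeline above compresses a molecular fingerprint down to inputs for four qubits. The segment-pooling reduction and angle encoding below are illustrative assumptions, not the paper's exact dimensionality-reduction step: each quarter of the bit vector is pooled to a bit density and mapped to a rotation angle in [0, pi] for one qubit.

```python
import math

# Assumed sketch: reduce a binary molecular fingerprint to four qubit
# rotation angles by splitting it into four equal segments, taking each
# segment's bit density, and scaling that density into [0, pi].

def fingerprint_to_angles(bits, n_qubits=4):
    seg = len(bits) // n_qubits
    densities = [sum(bits[i * seg:(i + 1) * seg]) / seg for i in range(n_qubits)]
    return [d * math.pi for d in densities]

fp = [1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0]  # toy 16-bit fingerprint
angles = fingerprint_to_angles(fp)
assert len(angles) == 4
assert all(0.0 <= a <= math.pi for a in angles)
```

These four angles would then parameterize the single-qubit rotations at the input of a QSVC, VQC, or QAE circuit.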
|
Title: |
IOT SYSTEM USING DEEP LEARNING FOR REAL-TIME HEALTH MONITORING AND ALSO PRIMARY
RECOGNITION OF HEALTH PROBLEMS |
Author: |
KASI VENKATA KIRAN , T SRINIVASA RAO |
Abstract: |
The significance of remote health monitoring for enhancing patient care and
decreasing healthcare expenses has grown in recent years owing to the rising
occurrence of chronic illnesses and the ageing population. The Internet of
Things (IoT) has attracted considerable attention as a possible solution for
remote health observation and monitoring. IoT-based systems can gather and
process a wealth of physiological data, such as heart rate, temperature, blood
oxygen level, and electrocardiogram (ECG) signals, and provide doctors with
immediate feedback on how to proceed. This study proposes an IoT-based scheme
for remote detection, monitoring, and early identification of health concerns in
home healthcare settings. The system is made up of three kinds of sensors: a
MAX30100 that measures heart rate and blood oxygen levels, an AD8232 that
records ECG signals, and an MLX90614 that takes temperature without touching the
skin. The MQTT protocol is used to forward the gathered data to a main server,
which applies a pre-trained deep learning architecture, a convolutional neural
network with an attention layer, to categorize possible illnesses. Based on ECG
data, the system can distinguish five distinct types of heartbeats: normal,
premature ventricular contraction, supraventricular premature beats, fusion of
ventricular beats, and unclassifiable beats. The device also indicates whether
the patient's oxygen levels and heart rate are normal. If serious irregularities
are found, the system links the user to the closest doctor for additional
diagnostics. |
Keywords: |
Deep Learning, Internet of Things (IoT), Convolutional Neural network, Sensor,
Health Monitoring |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
MULTI-USER EMOTION RECOGNITION IN CROWDED SCENES VIA ENHANCED PARTICLE
SWARM OPTIMIZED RECURRENT NEURAL NETWORKS |
Author: |
BHAGYASHREE DHARASKAR, PHANI KUMAR KANURI, M. SHARWANI, SANDEEP RASKAR, DR. N.
SREE DIVYA, DR. KANAKA DURGA HANUMANTHU, DR. MANISHA P. MALI, RAMESH BABU
PITTALA, M. NAGABHUSHANA RAO |
Abstract: |
Psychological group level emotion recognition (GER) is significant because it
facilitates understanding and identifying the behavior of people in large
congregations, organizations, and other facilities that require surveillance.
Some issues include occlusions, dynamic facial expressions, variation in pose,
and even low-resolution faces. To tackle these challenges, a new framework is
proposed that combines the Enhanced Particle Swarm Optimization (EPSO) algorithm
for identifying significant features with a recurrent neural network (RNN) for
learning the sequences involved. The
two-fold process involves feature reduction followed by meaningful group emotion
classification, taking into consideration the temporal dynamics of emotions. The
Acted Facial Expressions in the Wild (AFEW) dataset is used to validate the
proposed EPSO-RNN model by comparing it with the baseline methods, such as CNN,
SVM, and VGG-16. The experimental findings reveal better EPSO-RNN results in
different measures of recognizing group-level emotions. In order to enhance the
capabilities of the existing PSO, the proposed EPSO-RNN framework finally
combines the enhancement of the feature space optimization and sequential
emotion modeling by using recurrent neural network structures. Unlike existing
methods that primarily rely on deep convolutional architectures or conventional
classifiers with limited adaptability to real-world group settings, this study
fills a significant gap by introducing a hybrid EPSO-RNN model that jointly
optimizes feature selection and temporal emotion learning. The novelty lies in
leveraging enhanced particle swarm optimization to distill high-impact features
from noisy group environments, followed by RNN-driven modeling of emotional
evolution over time. Experimental validation on the AFEW dataset shows a
substantial improvement in classification accuracy and F1-score, outperforming
traditional CNN, SVM, and VGG-16 baselines. These findings confirm the
framework's potential for robust and scalable deployment in practical
surveillance and organizational emotion analytics scenarios. In summary, the
study introduces a hybrid EPSO-RNN model, offering new insight into combining
feature selection and temporal learning for improved group emotion recognition
in real-world conditions. |
Keywords: |
Group-level Emotion Recognition, Recurrent Neural Network (RNN), Enhanced
Particle Swarm Optimization (EPSO), Feature Selection, Deep Learning, Crowd
Emotion Analysis, Video Frame Processing |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14-- 2025 |
Full
Text |
|
Title: |
DEEP LEARNING-DRIVEN AUTOMATED IOT DEVICE IDENTIFICATION USING FULL PACKET DATA |
Author: |
SINGAMANENI KRISHNAPRIYA |
Abstract: |
The rapid proliferation of Internet of Things (IoT) devices has introduced
significant challenges in network security, device management, and traffic
monitoring. Accurate and automated IoT device identification is critical for
ensuring secure communication, anomaly detection, and enforcing access control
policies. Traditional identification methods, relying on static rule-based
approaches or shallow learning techniques, struggle with the increasing
diversity and evolving communication patterns of IoT devices. In this study, we
propose a novel deep learning-driven framework that leverages full packet data
analysis to achieve robust and scalable IoT device identification. The framework
integrates Convolutional Neural Networks (CNNs) for spatial feature extraction
and Long Short-Term Memory (LSTM) networks for temporal pattern learning,
enabling it to effectively capture packet header structures, payload
distributions, and sequential dependencies in IoT traffic. Additionally, dropout
regularization is employed to enhance generalization and mitigate overfitting,
ensuring resilience across heterogeneous IoT environments. The proposed method
is evaluated using benchmark IoT datasets, including UNSW IoT-23 and NB-IoT,
which demonstrate superior classification accuracy, scalability, and
adaptability compared to existing approaches. Experimental results highlight the
effectiveness of hybrid deep learning models in IoT security, achieving high
precision and low false positive rates in device identification. This research
underscores the potential of full packet data-driven deep learning approaches to
fortify IoT network defenses and advance next-generation automated
cybersecurity solutions. |
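The CNN-plus-LSTM pipeline this abstract describes can be illustrated with a minimal NumPy forward pass: a 1-D convolution extracts local (spatial) features from raw packet bytes, an LSTM summarizes them over time, and a softmax scores candidate device classes. All layer sizes, the random weights, and the five-class output are hypothetical placeholders, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, bias):
    """Valid 1-D convolution with ReLU: x (T, C_in), kernels (K, C_in, C_out)."""
    K, _, C_out = kernels.shape
    T_out = x.shape[0] - K + 1
    out = np.empty((T_out, C_out))
    for t in range(T_out):
        out[t] = np.tensordot(x[t:t + K], kernels, axes=([0, 1], [0, 1])) + bias
    return np.maximum(out, 0.0)

def lstm_last(x, Wx, Wh, b):
    """Run an LSTM over x (T, D) and return the final hidden state."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b            # (4H,) gate pre-activations
        i, f, g, o = np.split(z, 4)
        i, f, o = 1 / (1 + np.exp(-i)), 1 / (1 + np.exp(-f)), 1 / (1 + np.exp(-o))
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
    return h

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy packet: 64 bytes, one channel, scaled to [0, 1]; 8 conv filters of width 5.
packet = rng.integers(0, 256, size=(64, 1)) / 255.0
feats = conv1d(packet, rng.normal(0, 0.1, (5, 1, 8)), np.zeros(8))
h = lstm_last(feats, rng.normal(0, 0.1, (8, 64)),
              rng.normal(0, 0.1, (16, 64)), np.zeros(64))
probs = softmax(h @ rng.normal(0, 0.1, (16, 5)))  # 5 hypothetical device classes
```

In a real system the dropout regularization the abstract mentions would be applied between layers during training; it is omitted from this inference-only sketch.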
Keywords: |
Deep Learning, Full Packet Data Analysis, Convolutional Neural Networks (CNN),
Long Short-Term Memory (LSTM), Anomaly Detection |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14 -- 2025 |
Full
Text |
|
Title: |
AI-POWERED JOB APPLICATION MANAGEMENT FOR APPLICANTS |
Author: |
ZI QING CHEW , NOR FAZLIDA MOHD SANI |
Abstract: |
During the job search process, a resume serves as a job applicant's first
impression and will determine whether they will progress in the hiring process.
One of the major challenges job seekers face is ensuring that their resumes
align with job requirements, as qualification mismatches can lead to missed job
opportunities. The advancement of Applicant Tracking Systems (ATS) presents an
additional challenge, as resumes lacking relevant keywords or proper formatting
may be automatically rejected by ATS during the automated resume screening
phase. To address these challenges, an intelligent job application management
system has been developed, contributing to IT research through novel integration
of transformer-based NER models with interpretable job-resume matching metrics
for analyzing and improving resume contents. The methodology involved a
semi-automated annotation process to prepare the annotated resume dataset,
followed by training Named Entity Recognition (NER) models using the spaCy and
Flair libraries. These trained models were evaluated and compared based on
precision, recall, and F1-score metrics. The job-resume matching score was
calculated by comparing TF-IDF vectorization of NER-extracted skills from
resumes and job descriptions using cosine similarity, followed by normalization
with the Sigmoid function. Experimental results showed that the spaCy model
achieved an F1-score of 85.71%, outperforming the Flair NER model, which
achieved an F1-score of 79.92%. This research advances IT applications in human
resource technology by assisting job applicants in enhancing and tailoring their
resumes to better match desired job roles, increasing their chances of passing
ATS resume scans and being shortlisted for job interviews. |
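The matching computation this abstract describes (TF-IDF vectors over extracted skills, cosine similarity, then sigmoid normalization) can be sketched in plain Python. The skill lists below are hypothetical, the NER extraction step is omitted, and the smoothed-IDF variant is an assumption about the exact weighting used.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors over a shared vocabulary for a list of token lists
    (smoothed IDF, as in common implementations)."""
    vocab = sorted({t for d in docs for t in d})
    n = len(docs)
    df = {t: sum(t in d for d in docs) for t in vocab}
    idf = {t: math.log((1 + n) / (1 + df[t])) + 1 for t in vocab}
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append([tf[t] / len(d) * idf[t] for t in vocab])
    return vecs

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def match_score(resume_skills, job_skills):
    """Cosine similarity of TF-IDF skill vectors, squashed by a sigmoid."""
    v_resume, v_job = tfidf_vectors([resume_skills, job_skills])
    return 1 / (1 + math.exp(-cosine(v_resume, v_job)))

resume = ["python", "sql", "spacy", "nlp"]   # hypothetical NER-extracted skills
job = ["python", "nlp", "docker"]
score = match_score(resume, job)
```

Because the cosine of non-negative TF-IDF vectors lies in [0, 1], the sigmoid maps scores into roughly [0.5, 0.73]; a perfect skill overlap therefore scores higher than any partial overlap.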
Keywords: |
Resume; Job-Resume Matching; NER; Cosine Similarity; Natural Language
Processing. |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14 -- 2025 |
Full
Text |
|
Title: |
ADVANCED LUNG CANCER DETECTION MODEL WITH U-NET-BASED SEGMENTATION, ARI-TFMOA
FEATURE OPTIMIZATION AND MACHINE LEARNING CLASSIFICATION |
Author: |
SATISHKUMAR PATNALA, ANNEMNEEDI LAKSHMANARAO , SUJAN BABU VADDE , KESHETTI
SREEKALA , MOUPALI ROY, Dr. ASMITA MANNA |
Abstract: |
Lung cancer is among the deadliest diseases globally, highlighting the
importance of early and accurate detection to enhance survival chances.
Traditional deep learning models face challenges such as high computational
costs and lack of interpretability, while conventional machine learning
classifiers require high-quality feature representations for optimal
performance. To address these challenges, this study proposes an advanced lung
cancer detection model integrating segmentation, feature optimization and
classification using both machine learning and deep learning models. The
framework employs AMTUnet++-ASPP for abnormality segmentation, ensuring accurate
lung nodule detection from CT scan images. The segmented tumor regions undergo
feature optimization using Adaptive Radiant Inertia Tuned Fuzzy Multi-Objective
Algorithm (ARI-TFMOA), which refines extracted features and eliminates redundant
information. The optimized feature vectors are then classified using multiple
models, including ML algorithms and an artificial neural network (ANN), to
distinguish between normal, benign, and malignant lung conditions. To evaluate
the effectiveness of the proposed model, a comparative analysis is conducted by
implementing three different approaches: a baseline CNN for direct
classification, a U-Net-based segmentation followed by ML classification, and
the proposed AMTUnet++-ASPP segmentation with ARI-TFMOA optimized classification
using both ML models and ANN. The results demonstrate that the integration of
advanced segmentation and optimized feature selection significantly enhances
classification performance. In particular, the ANN model outperformed other
classifiers, leading to superior accuracy, precision, and recall values. This
hybrid approach, combining deep learning-based segmentation, feature
optimization, and deep learning classification, provides a computationally
efficient and clinically interpretable solution for AI-assisted lung cancer
diagnosis, reinforcing its potential for early detection and decision support
for radiologists. |
Keywords: |
Lung Cancer Detection, CT scan images, AMTUnet++, ARI-TFMOA, Machine Learning,
Deep Learning. |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14 -- 2025 |
Full
Text |
|
Title: |
LATENCY-AWARE NETWORK SLICING USING DISTRIBUTED RESOURCE ALLOCATION AND REGIONAL
ORCHESTRATION |
Author: |
SAIF SAAD ALNUAIMI |
Abstract: |
Network slicing has emerged as a fundamental enabler for delivering diverse
services with heterogeneous requirements in next-generation communication
systems. However, most existing approaches focus on either bandwidth or
computational resource allocation in isolation, often relying on centralized
architectures that struggle with latency, scalability, and information privacy
across distributed network components. This paper addresses this gap by
proposing a novel distributed network slicing architecture that jointly
optimizes bandwidth and compute resources. The core innovation is the
introduction of a regional orchestrator (RO), a new control-plane entity
positioned between base stations (BSs) and cloud nodes, to coordinate localized
resource allocation while preserving system privacy and scalability. We develop
a distributed resource allocation algorithm based on the splitting model
(dra-SM) to efficiently manage joint resource distribution without centralized
control. Simulation results show that our approach significantly reduces overall
network latency by approximately 15% compared to single-resource slicing, while
also achieving faster convergence and service-specific latency guarantees. This
work contributes a scalable, low-latency solution for real-world deployment of
joint network slicing across decentralized infrastructures. |
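As a rough illustration of how a regional orchestrator can coordinate allocation without central control, the sketch below uses classical dual decomposition: the RO posts a resource price, each slice responds with its locally optimal demand, and the price is adjusted until aggregate demand meets capacity. This is a generic stand-in, not the paper's dra-SM algorithm; the log-utility slices and all parameters are assumptions.

```python
def slice_demand(w, price):
    """Each slice maximizes w*log(x) - price*x locally, giving x = w / price.
    The slice's priority w stays private; only the demand is reported."""
    return w / price

def ro_allocate(weights, capacity, lr=0.01, iters=5000):
    """The RO iterates a price until total demand matches capacity,
    without ever seeing the slices' utility functions directly."""
    price = 1.0
    for _ in range(iters):
        demand = sum(slice_demand(w, price) for w in weights)
        price = max(price + lr * (demand - capacity), 1e-6)
    return [slice_demand(w, price) for w in weights]

weights = [3.0, 1.0, 2.0]          # hypothetical slice priorities
alloc = ro_allocate(weights, capacity=12.0)
```

At convergence the allocation is proportional to the priorities (here 6, 2, and 4 units); the same price-update loop can be run per resource to handle bandwidth and compute jointly.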
Keywords: |
Data Slicing, Cloud, Network Slicing, Bandwidth Consideration, Resource
Allocation. |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14 -- 2025 |
Full
Text |
|
Title: |
AI AND IOT-DRIVEN SMART CITIES: REVOLUTIONIZING ENERGY EFFICIENCY AND OPTIMIZING
TRAFFIC FLOW FOR SUSTAINABLE URBAN LIVING |
Author: |
CH GANGADHAR, FRANCIS MULAGANI, KOLLI SRINU, KOLLURU SURESH BABU, ANIL KUMAR
KATRAGADDA, K SWATHI, T MURALIDHARA RAO, CH CHANDRA MOHAN |
Abstract: |
This paper investigates the potential of using AI and IoT to improve energy
efficiency and traffic management in smart cities. It proposes an
innovative framework, AI-IoT, that combines real-time traffic control with a
real-time energy management system to alleviate urban congestion,
mitigate energy loss, and reduce carbon emissions. The models can regulate
signaling around street junctions and energy distribution in real-time by
applying sensors, IoT devices for data collection, and AI algorithms for
immediate decisions. The model was validated in simulation and showed
significant sustainability gains: 12% less energy consumption, 20% less traffic
congestion, and 17% lower CO2 emissions. The
findings show how AI and IoT can be harnessed to build smarter and
greener cities, delivering a repeatable approach to designing smarter cities in
the future. |
Keywords: |
Smart Cities, Artificial Intelligence, Internet of Things, Energy Efficiency,
Traffic Flow |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14 -- 2025 |
Full
Text |
|
Title: |
PHOTOMETRIC WEIGHT FORMULATED TRILATERAL FILTERS FOR ENHANCED PERCEPTUAL QUALITY
IN BIOMEDICAL IMAGING |
Author: |
SUDAGANI JYOTHI , P. MUTHU KRISHNAMMAL |
Abstract: |
This paper presents a novel computational framework for biomedical image
enhancement through the development of Photometric Weight Formulated Trilateral
Filters (PWFTF), advancing adaptive image processing algorithms for clinical
applications. Unlike conventional bilateral and standard trilateral filters, our
proposed method incorporates adaptive photometric weights that respond
dynamically to local image characteristics across different imaging modalities.
We evaluate the performance of our method on Ultrasound, X-Ray, and MRI
datasets, demonstrating significant improvements in noise reduction, edge
preservation, and overall perceptual quality. Comprehensive objective
assessments using established Image Quality Assessment (IQA) metrics show that
our PWFTF method outperforms state-of-the-art filtering techniques by an
average of 17.3% in PSNR, 12.6% in SSIM, and 9.8% in FSIM across all tested
modalities. This framework advances IT research by enabling scalable,
computationally efficient image processing for real-time clinical diagnostics,
reducing overhead compared to learning-based methods. The proposed filter
demonstrates particular effectiveness in preserving diagnostically significant
features while suppressing noise in low-contrast regions, making it suitable for
clinical applications requiring high diagnostic accuracy. |
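A generic trilateral filter of the kind discussed above can be sketched as a bilateral filter (spatial and photometric/range weights) extended with a third, gradient-similarity weight. The adaptive photometric weighting that defines PWFTF is not specified here, so this is only a baseline sketch with assumed parameters.

```python
import numpy as np

def trilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=0.1, sigma_g=0.1):
    """Generic trilateral filter on a 2-D float image in [0, 1]:
    spatial weight x photometric (range) weight x gradient-similarity weight."""
    H, W = img.shape
    gy, gx = np.gradient(img)
    grad = np.hypot(gx, gy)
    imgp = np.pad(img, radius, mode="reflect")
    gradp = np.pad(grad, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_s = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))   # fixed spatial kernel
    out = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            patch = imgp[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            gpatch = gradp[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w_r = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w_g = np.exp(-(gpatch - grad[i, j])**2 / (2 * sigma_g**2))
            w = w_s * w_r * w_g
            out[i, j] = (w * patch).sum() / w.sum()
    return out

# Noisy step edge: the filter should smooth flat regions but keep the edge.
rng = np.random.default_rng(1)
img = np.zeros((32, 32)); img[:, 16:] = 1.0
noisy = np.clip(img + rng.normal(0, 0.05, img.shape), 0, 1)
smoothed = trilateral_filter(noisy)
```

The range weight suppresses averaging across the intensity step while the gradient weight additionally discounts neighbors whose local structure differs, which is the intuition behind edge-preserving denoising in this family of filters.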
Keywords: |
Biomedical Image Processing, Trilateral Filtering, Ultrasound Imaging,
X-Ray Imaging, MRI. |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14 -- 2025 |
Full
Text |
|
Title: |
ENHANCED WIDE-AND-DEEP NEURAL NETWORKS FOR ROBUST MALWARE DETECTION USING OPCODE
FEATURES AND ZERO-DAY ANALYSIS |
Author: |
G BALA KRISHNA, K.C. SREEDHAR, V. PRADEEP KUMAR, SAIROHITH THUMMARAKOTI, D.
SANDHYA RANI, DR. MUMMADI RAMACHANDRA |
Abstract: |
Malware detection is a vital cybersecurity challenge, necessitating more
advanced means of identifying and combating such threats. To classify malware
families and detect previously unseen malware, opcode sequences and metadata
are used with the proposed Enhanced Wide-and-Deep Neural Network (EWDNN)
architecture. Combining wide components for explicit feature interactions and deep
components for abstract feature representations to structure the EWDNN, we
propose a model that outperforms all baselines with an accuracy of 65.46%,
precision of 63.96%, recall of 65.46% and an F1-score of 64.15%. The proposed
methodology exhibits scalability, robustness, and generalization, and
outperforms other traditional approaches, including Support Vector Machines,
K-Nearest Neighbors, and standalone Deep Neural Networks. Extensive experiments
demonstrate that the model is capable of real-world malware detection,
especially for zero-day threats, showing its usefulness in dynamic real-world
cybersecurity situations. |
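The wide-and-deep combination this abstract describes can be illustrated with a minimal NumPy forward pass: a linear "wide" part over explicit opcode n-gram indicators and a small "deep" MLP over dense features, with their logits summed before a softmax. Feature sizes, the four-class output, and the random weights are hypothetical, not the paper's EWDNN configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

def relu(x):
    return np.maximum(x, 0.0)

def wide_and_deep_forward(x_wide, x_deep, params):
    """Wide part: linear model over explicit (e.g. opcode n-gram) features.
    Deep part: small MLP over dense metadata features.
    Their logits are summed before the softmax, as in wide-and-deep models."""
    logit_wide = x_wide @ params["w_wide"]                 # (C,)
    h = relu(x_deep @ params["W1"] + params["b1"])
    h = relu(h @ params["W2"] + params["b2"])
    logit_deep = h @ params["W_out"]                       # (C,)
    z = logit_wide + logit_deep
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical sizes: 100 binary opcode-bigram indicators (wide),
# 16 dense metadata features (deep), 4 malware-family classes.
C = 4
params = {
    "w_wide": rng.normal(0, 0.05, (100, C)),
    "W1": rng.normal(0, 0.1, (16, 32)), "b1": np.zeros(32),
    "W2": rng.normal(0, 0.1, (32, 32)), "b2": np.zeros(32),
    "W_out": rng.normal(0, 0.1, (32, C)),
}
x_wide = (rng.random(100) < 0.1).astype(float)
x_deep = rng.normal(0, 1, 16)
probs = wide_and_deep_forward(x_wide, x_deep, params)
```

The design point is that the wide part memorizes sparse, explicit feature co-occurrences while the deep part generalizes from dense representations; summing their logits lets the model do both.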
Keywords: |
Malware Detection, Enhanced Wide-and-Deep Neural Network (EWDNN), Opcode
Sequences, Zero-Day Malware Detection, Scalability, Robustness, Machine
Learning, Cybersecurity. |
Source: |
Journal of Theoretical and Applied Information Technology
31st July 2025 -- Vol. 103. No. 14 -- 2025 |
Full
Text |
|
|
|