Journal of
Theoretical and Applied Information Technology
August 2025 | Vol. 103 No. 15 |
|
Title: |
TRAFFIC SPEED PREDICTION AND CONGESTION LEVEL IDENTIFICATION USING FLOATING CAR
DATA |
|
Author: |
FATEMEH AHANIN, NORWATI MUSTAPHA, MASLINA ZOLKEPLI, NOR AZURA HUSIN |
|
Abstract: |
Traffic congestion poses significant challenges to urban mobility and
transportation infrastructure worldwide. Accurate prediction of traffic speed
and timely identification of congestion levels are crucial for effective traffic
management and planning. Owing to the widespread adoption of telecommunication
technologies, various traffic datasets have become available, such as Floating
Car Data (FCD), which collects real-time information from vehicles in transit,
providing a rich and dynamic dataset for analyzing traffic speed. However,
predicting traffic speed and identifying congestion levels using FCD remains
challenging due to the complexities of traffic dynamics and the non-linear
nature of traffic flow. In response, multiple solutions have been proposed using
deep learning methods. This study addresses the persistent issue of FCD data
sparsity and its limitations in providing consistent, accurate traffic speed
predictions. The present work focuses on constructing an LSTM-based method,
called LSTM-C, to predict traffic speed. In the proposed LSTM-C method, a new
Contrast measure is introduced and incorporated to enhance the prediction of
traffic speed across candidate road segments. The LSTM-C model demonstrates a
significant improvement in both prediction accuracy and congestion level
identification, outperforming existing models such as those by Majumdar et al.
and Gao et al. Subsequently, traffic rules are applied to the predicted speeds
to determine congestion levels for each segment. The experimental results
demonstrate that the proposed model achieves a high level of accuracy, reaching
up to 96.697%, which represents an improvement of 1.6% and 1.79% in accuracy
compared to the two benchmark LSTM methods employed for speed prediction. |
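The abstract notes that traffic rules are applied to the predicted speeds to determine per-segment congestion levels. A minimal sketch of such a rule-based mapping; the speed-ratio thresholds below are purely illustrative assumptions, not the rules used in the paper:

```python
def congestion_level(speed_kmh, free_flow_kmh=60.0):
    """Map a predicted segment speed to a congestion level.

    Thresholds are illustrative assumptions: congestion is graded by the
    ratio of predicted speed to the segment's free-flow speed.
    """
    ratio = speed_kmh / free_flow_kmh
    if ratio >= 0.75:
        return "free flow"
    if ratio >= 0.50:
        return "moderate"
    if ratio >= 0.25:
        return "heavy"
    return "severe"

# Classify a sequence of predicted speeds for one road segment
predicted = [58.0, 40.0, 22.0, 9.0]
levels = [congestion_level(s) for s in predicted]  # one label per timestep
```

In a pipeline like the one described, this step would run on the LSTM's speed predictions for each candidate road segment.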
|
Keywords: |
Traffic Speed Prediction, Short-Term Speed Prediction, Long Short-Term
Memory (LSTM), Deep Learning, Data-Driven Traffic Analysis |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
VRBR-NET: A NOVEL DEEP LEARNING MODEL FOR THERMOGRAPHIC BREAST CANCER DETECTION |
|
Author: |
ANUSHA DERANGULA, TAN KUAN TAK, R. THIAGARAJAN, PRAVIN RAMDAS KSHIRSAGAR |
|
Abstract: |
Breast cancer remains a leading cause of mortality worldwide, and early,
accurate detection is critical for effective treatment and improved patient
outcomes. This paper addresses the challenge of advanced breast cancer detection
by introducing IVRBR-Net, a novel deep learning model that integrates Inception
V4, ResNet-50, and Bidirectional Recurrent Neural Networks (RNNs) to analyze
thermographic images. The methodology begins with rigorous preprocessing
steps—grayscale conversion, contrast enhancement via Multipurpose Beta Optimized
Bi-histogram Equalization (MBOBHE), noise reduction through bilateral filtering,
and image refinement using the Affine Projection Algorithm (APA)—to optimize
image quality while preserving essential features. Feature extraction is
performed using Scale-Invariant Feature Transform (SIFT) and Haralick
descriptors, capturing critical statistical properties such as mean, variance,
entropy, and skewness. The model is trained and validated on a dataset of 120
high-resolution thermographic images from the Database for Mastology Research
(DMR). IVRBR-Net achieves outstanding performance metrics, including an accuracy
of 99.82%, specificity of 99.74%, sensitivity of 99.35%, precision of 99.48%,
and an F1-score of 99.68%, significantly outperforming existing state-of-the-art
models such as MobileNetV2, VGG-16 with attention, and ResNet-50. These results
demonstrate the model’s potential as a reliable and precise tool for breast
cancer diagnostics, offering a robust framework for future applications in
medical image analysis and contributing to improved clinical decision-making. |
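The abstract lists mean, variance, entropy, and skewness among the statistical properties captured during feature extraction. A simplified illustration of computing these descriptors directly from grayscale intensities (not the SIFT/Haralick pipeline used by the model):

```python
import math

def intensity_stats(pixels):
    """Compute mean, variance, Shannon entropy, and skewness for a
    grayscale image given as a flat list of 0-255 intensities.
    A didactic sketch of the named descriptors only."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var)
    # Shannon entropy of the normalized intensity histogram
    hist = {}
    for p in pixels:
        hist[p] = hist.get(p, 0) + 1
    entropy = -sum((c / n) * math.log2(c / n) for c in hist.values())
    # Third standardized moment; zero for a symmetric distribution
    skew = 0.0 if std == 0 else sum((p - mean) ** 3 for p in pixels) / (n * std ** 3)
    return {"mean": mean, "variance": var, "entropy": entropy, "skewness": skew}

# Symmetric toy "image": entropy of 4 equiprobable levels is 2 bits
stats = intensity_stats([10, 10, 20, 20, 30, 30, 40, 40])
```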
|
Keywords: |
Accuracy, Breast cancer, Deep Learning, Sensitivity, Specificity, Thermograph |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
RISK PREDICTION OF THEFT CRIMES IN URBAN COMMUNITIES: AN INTEGRATED MODEL OF
LSTM AND ST-GCN |
|
Author: |
MEGAVATH TEJASHWINI, Dr M VENKATESWARA RAO, NAVNATH B. POKALE, Dr N VENKATESH,
NAVNATH KALE, Dr N ARJUN |
|
Abstract: |
Urbanization has transformed urban communities but also introduced challenges in
managing crime and ensuring public safety. For prevention and control to be
effective, a criminal risk prediction system is necessary. This paper presents a
model that integrates Long Short-Term Memory Networks (LSTM) and
Spatial-Temporal Graph Convolutional Networks (ST-GCN) to identify high-risk
areas in cities. Using topological maps and crime data from Chicago, the model
extracts spatial-temporal and temporal features to analyze theft patterns.
Experimental results demonstrate its ability to predict crime occurrences within
specific timeframes, offering a valuable tool for urban safety planning. The
objectives of this study are (1) to develop an integrated LSTM + ST-GCN model
for theft crime risk forecasting in urban communities, (2) to validate the model
on real-world Chicago crime data, and (3) to benchmark its performance against
standard baselines. Unlike existing methods that focus solely on spatial or
temporal patterns, our hybrid approach fuses both dimensions for near real-time
risk prediction, demonstrating up to 15% lower RMSE compared to standalone LSTM
models. |
|
Keywords: |
Logistic Regression, Support Vector Machines (SVM), Naive Bayes,
Predictive Models, Statistical Modeling. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
HYBRID EFFICIENTNET-TRANSFORMER ARCHITECTURE FOR AUTOMATED DETECTION OF GLAUCOMA |
|
Author: |
MALLA SIREESHA, MEKA JAMES STEPHEN, P.V.G.D. PRASAD REDDY |
|
Abstract: |
In recent years, the early and accurate diagnosis of Glaucoma, a leading cause
of permanent blindness, has highlighted the importance of effective treatment
and better patient care. Conventional deep learning methods that only use
Convolutional Neural Networks (CNNs) have shown significant performance in
classification tasks involving fundus images, but they often fail to capture
global contextual dependencies that are crucial for fine-grained disease
categorization. Addressing this limitation, a Hybrid CNN-Transformer
architecture has been proposed that uses the local feature extraction capability
of a pretrained EfficientNetB0 backbone and augments it with the global modeling
power of Transformer encoders. The EfficientNetB0 module extracts robust spatial
features, which are improved using Transformer layers to model inter-feature
relationships throughout the image. The proposed Hybrid EfficientNet-Transformer
model was trained and validated on a dataset created by combining five datasets
containing Fundus images, employing standard preprocessing pipelines and label
encoding strategies. Extensive experiments show that the proposed hybrid
approach attains a high classification accuracy of 99% on the unseen test set,
outperforming many existing hybrid CNNs and transfer learning models. These
results indicate that the fusion of CNNs and Transformers offers a powerful
framework for high-precision glaucoma detection, creating new prospects for the
development of automated ophthalmic screening tools. |
|
Keywords: |
Glaucoma, Convolutional Neural Networks (CNNs), EfficientNetB0, Fundus images,
Transformer encoders. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
DYSARTHRIC SPEECH RECOGNITION USING WAVENET CONVOLUTIONAL NEURAL NETWORK WITH
PROSODY CONSISTENCY LOSS FUNCTION |
|
Author: |
ANEETA S ANTONY, ROHINI NAGAPADMA, AJISH K ABRAHAM |
|
Abstract: |
Dysarthric speech recognition concentrates on understanding speech impairments
caused by neurological disorders. Speech recognition improves communication by
enhancing adaptability and clarity for affected individuals. However,
dysarthric speech recognition struggles to transcribe speech accurately, both
because variations in pitch, articulation, and rhythm differ significantly
from typical speech and because dysarthric speech varies across speakers. In
this research, the WaveNet Convolutional Neural Network with Prosody
Consistency Loss Function (WNCNN-PCLF) is proposed to recognise and classify
dysarthric speech accurately. WaveNet is incorporated into the CNN to capture
local speech features, enhancing the model’s ability to determine intricate
variations and patterns in distorted speech. The PCLF assists in preserving
natural speech patterns, which makes for more accurate rhythm and tone
representation. This integration therefore enables better adaptation to
dysarthric speech, addressing both prosody and articulation issues
effectively. Hence, the proposed WNCNN-PCLF achieves high accuracies of 99.92%
and 98.34% on the UA-Speech and Kannada datasets, respectively, outperforming
existing methods such as the Densely Squeezed and excitation Attention-gated
Network (DySARNet). |
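WaveNet-style models owe their long temporal context to stacked dilated causal convolutions. A small sketch of the standard receptive-field arithmetic; the kernel size and doubling dilation schedule below are generic WaveNet-style assumptions, not values from the paper:

```python
def receptive_field(kernel_size, dilations):
    """Receptive field (in samples) of a stack of dilated causal
    convolutions: each layer adds (kernel_size - 1) * dilation samples.
    Illustrative arithmetic only; layer settings are assumptions."""
    field = 1
    for d in dilations:
        field += (kernel_size - 1) * d
    return field

# With kernel 2 and dilations 1,2,4,8,16, five layers already span
# 32 samples of context, which is why such stacks suit speech signals.
rf = receptive_field(2, [1, 2, 4, 8, 16])
```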
|
Keywords: |
Dysarthric Speech Recognition, Local Speech, Prosody Consistency Loss
Function, Speakers, Wavenet Convolutional Neural Network. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
DEEP LEARNING-DRIVEN WEB INFORMATION EXTRACTION WITH CNN AND LSTM MODELS |
|
Author: |
B. BHAVANI, Dr. D. HARITHA |
|
Abstract: |
The rapid growth of web data presents immense opportunities as well as
challenges for data extraction methods. Traditional web scraping solutions are
easily outmatched by the dynamism and heterogeneity of web content, leading to
frequent failures and inefficiency. This article examines how these weaknesses
can be mitigated by using adaptive deep learning models for web data
extraction, offering a robust and scalable solution to the problem. We propose
an adaptive deep learning model that learns and generalizes over diverse web
structures and content types using neural networks. The framework is dynamic
and adapts to changing web environments by utilizing domain adaptation and
transfer learning to ensure consistency and accuracy in data extraction.
Through considerable experimentation we have determined that our adaptive
architecture is considerably faster than established scraping approaches, with
noteworthy enhancements in accuracy, resilience to structural change, and
reduced dependence on manual configuration. These results illustrate the
effectiveness of adaptive deep learning for web data extraction and point the
way toward smarter, more automated web scraping systems. |
|
Keywords: |
Web Data, Conventional Methods, Deep Learning, LSTM, Web-Scraping |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
DIGITAL PLATFORMS FOR SUPPORTING ENTREPRENEURSHIP: EU EXPERIENCE FOR UKRAINE |
|
Author: |
Nataliia Tiahunova, Olena Zhuk, Viacheslav Kokhno, Pavlo Tereshchenko, Andrey
Skrylnik |
|
Abstract: |
The aim of this research is to assess the impact of the use of digital platforms
for entrepreneurship on business flexibility in the European Union (EU) and to
provide recommendations for Ukraine. The study employed the following methods:
correlation, regression, comparative, and statistical analysis. The regression
analysis found that digital technologies by themselves have a smaller impact on
business flexibility than in combination with other important factors. These
factors included talent, capital, regulation, and technological infrastructure.
uniqueness of the Ukrainian resource Diia is noted, which integrates state
services, services for businesses and citizens, and documents in one platform. A
further direction for the platform’s development is proposed through the
integration of AI-based technologies for intelligent form filling, automatic
data verification, personalization of services, and provision of business
advice. The results of the study may be useful for the development of
digitalization policies at the state and enterprise levels by increasing the
focus on the identified influencing factors. |
|
Keywords: |
Digital Platforms, Entrepreneurship, Artificial Intelligence, Capital,
Regulation, Technological Infrastructure. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
APPLICATION OF ARTIFICIAL INTELLIGENCE IN THE DEVELOPMENT OF EU STRATEGIES TO
STRENGTHEN THE INSTITUTION OF POLITICAL ACCOUNTABILITY |
|
Author: |
YULIIA MELNYK, NADIIA HERBUT, INNA PIDBEREZNYKH, BOHDAN HRUSHETSKYI, MARINA
SHULGA |
|
Abstract: |
Political accountability is a key element of democratic governance and
transparency of state institutions. This is particularly relevant in the context
of the implementation of EU standards, which imply increasing transparency,
anti-corruption measures, and strengthening democratic principles. The study
focuses on the analysis of the effectiveness of EU strategies in strengthening
political accountability in member states and candidate countries, such as the
Netherlands, Poland, Hungary, Ukraine, and Serbia. The importance of this study
is determined by the need to develop effective mechanisms for adapting EU
standards in different political contexts. The aim of the research is to study
the impact of the implementation of European strategies on political
accountability in selected countries, as well as identify the main factors that
contribute to or hinder their effective implementation. The study employed the
following methods: quantitative analysis to assess changes in political
accountability metrics, sociological surveys to determine the level of trust in
governments in five countries, comparative analysis to identify differences in
the implementation of EU standards between member states and candidate
countries, and correlation analysis to assess the relationship between the
implementation of European standards and key indicators of political
accountability. In addition, the research explores how artificial intelligence
(AI) technologies can be applied in the development and monitoring of EU
strategies, in particular through data-driven evaluation of public
administration, risk modelling, and the identification of political misconduct
patterns. The results of the study showed that EU member states demonstrate
higher political accountability indicators due to the effective adaptation of
European standards. Preliminary findings also suggest that AI-supported tools,
such as algorithmic auditing and digital transparency platforms, can strengthen
accountability mechanisms by enabling early detection of corruption risks and
increasing civic oversight. The Netherlands consistently maintains a high
level of democracy and transparency, while Poland and Hungary face certain
challenges due to political polarization and centralization of power. Different
trends are observed in candidate countries such as Ukraine and Serbia: Ukraine
demonstrates gradual progress, while Serbia shows stagnation. |
|
Keywords: |
Political Responsibility, European Standards, Democracy, Government
Transparency, Fight against Corruption, CPI, Social Reforms, Artificial
Intelligence, Digital Governance, Algorithmic Accountability |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
AN EFFECTIVE APPROACH FOR HISTOPATHOLOGIC ORAL CANCER PREDICTION USING
BITTERLING FISH OPTIMIZATION TECHNIQUE |
|
Author: |
PULLAIAH PINNIKA, VENKATA RAO KASUKURTHI |
|
Abstract: |
Oral cancer poses a substantial global health concern, necessitating immediate
action to mitigate its severe consequences. However, existing methods used for
oral cancer classification exhibit limitations such as getting trapped in the
local optima, high sensitivity to noisy data, and poor convergence speed,
resulting in lower prediction reliability and clinical applicability. Hence,
this research proposes a Bitterling Fish Optimization with Probability Entropy
(BFO-PE) algorithm for feature selection during oral cancer classification. The
PE enables the BFO to select optimal feature subsets by simulating the spawning
behavior of bitterling fish, updating iteratively toward an optimal feature set,
thereby enhancing performance. Initially, the Oral cancer (Tips and Tongue)
Image (OCI) dataset and Kvasir Video and Image Repository (KVASIR) dataset are
used to validate the model’s performance. Next, pre-processing resizes images
to standardized dimensions using interpolation. Feature extraction is then
performed using the ResNet 152 architecture, after which feature selection is
carried out. Finally, the ResNet layer is deployed for classification of oral
cancer into binary classes. The proposed approach attains superior accuracies
of 99.04% on the OCI dataset and 99.75% on the Kvasir dataset, outperforming
existing methods like Convolutional Neural Networks - Improved Tunicate Swarm
Algorithm (CNN-ITSA). These results demonstrate the effectiveness,
generalizability, and potential clinical applicability of the developed system
for oral cancer prediction. |
|
Keywords: |
Bitterling Fish Optimization; Convolutional Neural Networks; Improved Tunicate
Swarm Algorithm; Oral Cancer; Probability Entropy. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
APPLICATION OF CHATGPT BY EDUCATORS AT THE TERTIARY LEVEL: SYSTEMATIC LITERATURE
REVIEW |
|
Author: |
LIUDMYLA HOLUBNYCHA, OLENA KUZNETSOVA, NATALIA SOROKA, TETIANA SHCHOKINA,
TETYANA KOSHECHKINA, OKSANA KOVALENKO |
|
Abstract: |
The advent of the conversational AI model, ChatGPT, has garnered considerable
attention from educators, presenting substantial avenues for pedagogical
innovation and holding extensive promise for integration by tertiary-level
educators. However, a definitive understanding of optimal strategies for
ChatGPT’s effective deployment within university pedagogy remains elusive.
Consequently, this investigation undertakes a review of the extant scientific
literature to address these critical inquiries. The purpose of the research is
to thoroughly synthesize and critically evaluate the practical deployment of
ChatGPT by tertiary level educators, involving a comprehensive assessment of its
application across the initial two-year period following its introduction
(November 30, 2022 – November 30, 2024) using a systematic review of scholarly
publications. The study employed the PRISMA guidelines for data collection and
evaluation, followed by a systematic review. The research findings derived from
this process were then subjected to thematic analysis, organized around the
identified themes and categories. The research results were examined based on
the identified research questions. The results identify the spheres of
university educators’ activity, as reported in experimental scholarly
publications, in which the use of ChatGPT offers educationally valuable
assistance to university educators. Pedagogical activities augmented by
ChatGPT are critically examined, highlighting key findings, benefits, and
concerns that the incorporation of ChatGPT may raise in each identified sphere
of tertiary-level teaching. Recommendations to prevent or lessen the likely
negative aspects of incorporating ChatGPT into university educators’ teaching
practice are presented. The unique status of ChatGPT as a facilitative aid in
university instructional settings is ascertained. |
|
Keywords: |
Application of ChatGPT, Higher education, Literature review, Teaching |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
INFLUENCER CREDIBILITY AND BRAND REPUTATION IN SHAPING TRUST AND PURCHASE
INTENTION: EXPLORING GENERATION Z'S RESPONSE TO LOCAL BEAUTY PRODUCTS ON TIKTOK |
|
Author: |
NATHASIA, HANSEN THENDY, JULIUS ROMARIO, ANISA LARASATI |
|
Abstract: |
Driven by societal pressures and beauty standards, skincare products have become
a daily necessity, with TikTok emerging as a prominent platform for beauty
trends. This study explores the impact of brand reputation on
purchase intention for local beauty products among Generation Z, examining the
moderating role of TikTok influencer credibility and the mediating effect of
trust. Although TikTok has become a powerful platform for shaping consumer
behavior, particularly purchase intention, the extent to which influencer
credibility moderates the relationship between brand reputation and purchase
intention remains insufficiently examined. This research addresses this gap in
the literature by proposing a single framework and conducting an experiment to
assess how influencer credibility influences the strength of brand reputation’s
effect on purchase intention. A quantitative approach was applied using a 2
(brand reputation: positive vs. negative) × 2 (influencer credibility: high vs.
low) between-subjects experimental design, involving 135 participants who were
randomly assigned to one of four scenarios. The findings confirm that
brand reputation significantly increases purchase intention. Moreover, an
influencer with high credibility strengthens the positive impact of brand
reputation on purchase intention. However, influencers with low credibility do
not significantly affect consumer purchase decisions, highlighting the
importance of authenticity and expertise over mere popularity. Interestingly,
influencer credibility does not moderate the relationship between brand trust
and purchase intention. Trust was found to fully mediate the relationship
between brand reputation and purchase intention, serving as a crucial
psychological mechanism that converts positive brand perceptions into buying
behavior. This suggests that trust, grounded primarily in consumers’ direct
brand experiences and overall reputation, is less influenced by external
endorsements. These findings highlight the importance of managing brand
reputation and partnering with credible influencers to maximize marketing
effectiveness in Generation Z's purchasing decisions in the local beauty market. |
|
Keywords: |
Brand Reputation, Brand Trust, Purchase Intention, TikTok Influencer
Credibility, Beauty Products |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
MULTI-SCALE RETINEX-UNET FOR ENHANCEMENT OF LOW LIGHT WEAK CONTRAST IMAGES |
|
Author: |
MUDDAPU HARIKA, GOTTAPU SASIBHUSHANA RAO, RAJKUMAR GOSWAMI |
|
Abstract: |
Enhanced image quality is crucial in image processing
applications. However, images captured in low-light environments often suffer
from poor contrast and noise, leading to a loss of detailed information. To
address this challenge, we propose a Multi-Scale RETINEX-UNET (MSR-UNET) model
for low-light weak contrast (LLWC) image enhancement. This novel approach
integrates a modified U-Net architecture with an improved multi-scale Retinex
(IMSR) model, aiming to preserve natural colors while enhancing visual quality.
The proposed model is validated using the Renoir dataset, and its effectiveness
is measured through PSNR (34.3 dB) and SSIM (99%). Compared to conventional
enhancement techniques, our model outperforms existing methods by effectively
reducing noise while maintaining structural details. This study advances deep
learning-based image enhancement and sets a new benchmark for LLWC image
processing. |
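The abstract reports PSNR (34.3 dB) as one of its quality metrics. For reference, a minimal PSNR computation over flat pixel lists; this illustrates the metric itself, not the paper's evaluation code:

```python
import math

def psnr(reference, enhanced, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and
    an enhanced one, both given as flat lists of pixel intensities.
    Higher values mean the enhanced image is closer to the reference."""
    mse = sum((r - e) ** 2 for r, e in zip(reference, enhanced)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Tiny example: MSE of 2 over 8-bit pixels gives roughly 45 dB
value = psnr([50, 60, 70, 80], [52, 58, 70, 80])
```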
|
Keywords: |
Image Enhancement, Convolutional Neural Networks, Grey Level Co-occurrence
Matrix, U-Net, Multi-Scale Retinex, Deep Learning. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
IMPLICATIONS OF USING ARTIFICIAL INTELLIGENCE TECHNIQUES TO ENHANCE A CHILD'S
SOCIAL SKILLS |
|
Author: |
MHMMAD SULIEMAN AL SHAAR |
|
Abstract: |
Social skills are critical for a child’s development as they are fundamental for
building relationships and succeeding academically and professionally. In
comparison to the academic spheres of reading, language, and mathematics, these
skills are just as valuable. However, children, particularly those with autism
spectrum disorders (ASD) and social communication disorders, struggle with
appropriate social behavior. With advancements in artificial intelligence (AI),
such as natural language processing and computer vision, there are new ways to
enhance social interactions for children. AI technologies can enable children to
receive feedback in real time, engage them with social scenarios, and encourage
social play through developmentally appropriate methods. There are, however,
ethical, educational, and psychological concerns, such as data privacy, the
implications of AI on child development, and the nature of interaction between
machines and children. The goal of this study is to analyze how AI can be
effectively utilized to promote social skills in childhood, to survey current
applications and interventions, and to point out gaps for future research. This
research aims to draw attention not only to the existing models but also to the
innovative approaches in order to highlight the potential of AI in transforming
children’s development when used responsibly. |
|
Keywords: |
Social skill, Childhood, Artificial Intelligence, Social Interactions, School
Performance |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
DESIGN OF AN ITERATIVE METHOD FOR ENHANCED IOT DATA MANAGEMENT AND PROCESSING IN
HEALTHCARE APPLICATIONS |
|
Author: |
VONTERU SRIKANTH REDDY, PADARTHI SWATHI, GOLLA USHA RANI |
|
Abstract: |
This study addresses the critical need for advanced frameworks in the realm of
Internet of Things (IoT) data storage and processing, particularly within the
healthcare sector. Traditional approaches often grapple with the complexity and
diversity of IoT-generated data, leading to inefficiencies in data management
and utilization. Existing methodologies fall short in accurately capturing the
intricate relationships among heterogeneous data sources, including medical
devices, patient information, and environmental parameters. Moreover, they
struggle to process semantic aspects of data effectively, limiting the potential
for nuanced analysis and interpretation operations. To address these gaps, this
study introduces a novel IoT-ML (Internet of Things - Machine Learning)
framework that integrates ontology-based data modeling, semantic Natural
Language Processing (NLP), and adaptive Machine Learning (ML) for enhanced
healthcare data processing. The proposed model includes a deep learning-based
ontology generation engine, Transformer-based semantic interpretation (using
BERT), and an adaptive, resource-aware ML schema to support real-time,
energy-efficient data analysis. A unified IoT-ML schema is also developed,
integrating semantic and numeric data processing capabilities to support
efficient data querying and interoperability across varied IoT platforms. The
implementation of machine learning models introduces efficient data encoding
strategies, reducing storage requirements while preserving data integrity
levels. Adaptive learning algorithms are designed to accommodate the
heterogeneity of IoT data, optimizing computational complexity. Furthermore, our
models are resource-aware, dynamically adjusting to the computational and
storage limitations of IoT devices and scenarios. Empirical evaluation using
healthcare datasets from IEEE Data Port and Kaggle demonstrates the superiority
of our proposed framework. Results reveal significant improvements over existing
methods [4,5,16], including a 4.9% rise in precision, 4.5% improvement in
accuracy, 5.9% rise in recall, 10.4% reduction in delay, 8.3% increase in AUC, and
5.5% improvement in specificity for classification tasks. These advancements
underscore the potential of our proposed IoT-ML standard in revolutionizing
healthcare data management, promising significant impacts on the precision,
efficiency, and scalability of IoT data processing and storage sets. |
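One common form of the "efficient data encoding strategies" the abstract mentions is delta encoding, which stores the first sensor reading plus successive differences while remaining lossless. A hypothetical illustration of the general idea, not the framework's actual encoder:

```python
def delta_encode(readings):
    """Encode integer sensor readings as the first value plus successive
    differences. Slowly varying vitals produce many small deltas, which
    compress well downstream. Illustrative sketch only."""
    if not readings:
        return []
    return [readings[0]] + [b - a for a, b in zip(readings, readings[1:])]

def delta_decode(deltas):
    """Invert delta_encode by accumulating the differences, recovering
    the original readings exactly (data integrity is preserved)."""
    out, total = [], 0
    for d in deltas:
        total += d
        out.append(total)
    return out

# A heart-rate stream round-trips losslessly through the encoder
heart_rate = [72, 72, 73, 75, 75, 74]
encoded = delta_encode(heart_rate)
```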
|
Keywords: |
IoT Data Management, Semantic Data Processing, Ontology-Based Modeling, Adaptive
Learning Algorithms, Healthcare IoT Scenarios |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
WHAT DRIVES IMPULSIVE BUYING IN VIDEO GAME MICROTRANSACTIONS? A STRUCTURAL
EQUATION MODELING STUDY OF INDONESIAN GAMERS |
|
Author: |
FARZA ALIF MAHENDRA, RIYANTO JAYADI, TANTY OKTAVIA |
|
Abstract: |
Microtransactions in video games have become a growing monetization strategy but
also trigger impulsive buying behavior. This study aims to evaluate the factors
influencing impulse buying in video game microtransactions in Jabodetabek,
focusing on key variables such as Shopping Enjoyment, Impulse Buying Tendency,
Urge to Purchase, Positive Affect, Negative Affect, and Hedonic Motivation. This
research employs a quantitative approach with 436 respondents, analyzed using
Structural Equation Modeling (SEM) with SmartPLS 3.0. The findings indicate that
Shopping Enjoyment enhances Positive Affect, which subsequently drives Urge to
Purchase and Impulse Buying. Additionally, Impulse Buying Tendency significantly
influences Impulse Buying, whereas Negative Affect and Hedonic Motivation show
no significant impact. These findings provide insights for the gaming industry
to develop more ethical monetization strategies and help players manage their
purchasing behavior more effectively. |
|
Keywords: |
Impulsive Buying, Microtransactions, Monetization, Video Game, Structural
Equation Modeling |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
SECURE USER AUTHENTICATION AND KEY AGREEMENT IN SMART NETWORKS WITH BLOCKCHAIN
GATEWAYS |
|
Author: |
DURVASI GUDIVADA, M. KAMESWARA RAO |
|
Abstract: |
As IoT-driven smart environments such as intelligent cities, buildings, and
industries develop quickly, user authentication is becoming increasingly
important to guard against illegal access and guarantee data security. Because
of issues such as device heterogeneity, computing resource limitations, and
potential single points of failure, conventional and current authentication
methods frequently fail to overcome security issues such as impersonation,
replaying, and denial-of-service attacks in IoT networks. With the goal of
offering lightweight, secure, and decentralized user authentication and key
agreement in smart settings, this study suggests a blockchain-based user
authentication and secure key agreement protocol for smart networks. In this
case, gateway nodes are permitted to create a blockchain to authenticate users
and IoT nodes by integrating cryptographic hash algorithms. The AVISPA tool is
used to formally verify the protocol's security. The comparative analysis reveals
that our protocol has an execution time of 0.0858 ms, confirming its competitive
performance against more recent research efforts. |
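As a rough illustration of the hash-based challenge-response idea the abstract describes, a minimal sketch in Python follows; the names and message flow are illustrative assumptions, not the authors' AVISPA-verified protocol:

```python
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    """SHA-256 over the concatenation of the inputs."""
    return hashlib.sha256(b"||".join(parts)).digest()

# Registration: the gateway stores a hash of the user's secret, never the secret.
user_secret = b"user-password-or-key"
stored = h(user_secret)

# Login: the gateway issues a fresh nonce; the user proves knowledge of the
# secret by hashing it with the nonce. Replaying an old response fails because
# the nonce changes every session.
nonce = secrets.token_bytes(16)
response = h(h(user_secret), nonce)   # computed on the user side
assert response == h(stored, nonce)   # verified on the gateway side
```

The fresh nonce is what defeats the replay attacks mentioned in the abstract; the one-way hash keeps the stored credential useless to an attacker who reads gateway storage.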
|
Keywords: |
IoT, User Authentication, Smart Contract, AVISPA |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
MINERAL IDENTIFICATION USING ENSEMBLE OF HANDCRAFTED AND DEEP LEARNING FEATURES
WITH XGBOOST |
|
Author: |
SIRAM DIVYA, KUNJAM NAGESWARA RAO |
|
Abstract: |
The industry that utilizes beach sand minerals containing titanium, zirconium,
and other strategic elements requires precise classification of mineral grains.
The identification techniques in common use today are time-consuming, require
expert interpretation, and produce inconsistent results. This study uses a
well-annotated MINET (Mineral Identification NETwork) dataset of high-resolution
mineral grain images to develop a powerful automated classification machine
learning pipeline that addresses these limitations. The proposed framework
combines handcrafted texture, color, and shape features with deep features
extracted from a ResNet50 model. These joint representations produce an advanced
system that improves the identification of intricate mineral formations. Our
framework uses XGBoost as its classifier to show how combining handcrafted and
deep learning features boosts automated petrography systems and strengthens
MINET's position as a critical benchmark for mineral recognition intelligence. |
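The feature-fusion step described above, concatenating handcrafted descriptors with ResNet50 embeddings before classification, can be sketched roughly as follows; the random arrays stand in for the real features, and the per-block normalization is our assumption, not a detail from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: in the paper these come from texture/color/shape
# descriptors and from a ResNet50 backbone, respectively.
handcrafted = rng.normal(size=(8, 32))    # 8 grains, 32 handcrafted features
deep = rng.normal(size=(8, 2048))         # 8 grains, 2048 ResNet50 features

def zscore(x):
    """Standardize each column so neither feature block dominates by scale."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

# Ensemble step: normalize each block, then concatenate along the feature axis.
fused = np.hstack([zscore(handcrafted), zscore(deep)])
# `fused` would then be passed to a classifier such as
# xgboost.XGBClassifier().fit(fused, labels)
```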
|
Keywords: |
Mineral Gains,MINET,Handcrafted Features,Resnet50,XGBOOST |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
HYBRID ATTENTION-BASED DEEP LEARNING MODEL FOR WIND SPEED FORECASTING IN
RENEWABLE ENERGY APPLICATIONS |
|
Author: |
HEMANTH SAI MADUPU, BALA SAIBABU BOMMIDI, IMRAN ABDUL, K. RAMBABU, BEJJAM S N
BENARJI |
|
Abstract: |
Accurate wind speed forecasting (WSF) is critical for the efficient integration
of wind energy into power systems. In this study, a novel hybrid model combining
Ensemble Empirical Mode Decomposition (EEMD), Convolutional Neural Network
(CNN), and Attention-based Long Short-Term Memory (ALSTM)—referred to as
EEMD-CNN-ALSTM—is proposed for short-term wind speed prediction. The model
leverages EEMD to decompose complex and non-stationary wind speed signals into
more manageable components, CNN to extract localized spatial features, and ALSTM
to selectively capture temporal dependencies using attention mechanisms. The
proposed model is evaluated using 1-hour interval wind speed data collected from
two distinct wind farms: Garden City and Idalia. Performance is assessed using
standard metrics including RMSE, MAE, MSE, and R². Results demonstrate that the
EEMD-CNN-ALSTM model consistently outperforms benchmark models and other hybrid
architectures. Across both wind farms, the proposed model achieves the lowest
error rates and highest predictive accuracy, confirming its robustness and
effectiveness in handling the complexities of WSF. |
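The short-term forecasting setup the abstract describes starts from framing the hourly wind speed series as supervised windows. A minimal sketch with a naive persistence baseline follows; this is our illustration of the data framing and RMSE metric, not the EEMD-CNN-ALSTM pipeline itself:

```python
import math

def make_windows(series, lookback):
    """Frame a 1-hour-interval series as (history window -> next value) pairs."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return X, y

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

speeds = [5.1, 5.3, 5.0, 4.8, 5.2, 5.6, 5.9, 5.4]  # toy wind speeds (m/s)
X, y = make_windows(speeds, lookback=3)

# Persistence baseline: predict the last value of each window. Any learned
# model is evaluated by how far it beats this on RMSE/MAE.
preds = [w[-1] for w in X]
baseline_rmse = rmse(y, preds)
```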
|
Keywords: |
Attention-Based LSTM, Hybrid Model, Time Series Prediction, Deep Learning,
Renewable Energy, Short-Term Forecasting |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
ROUGH SET BASED PRIVACY PRESERVATION OF HEALTHCARE DATA USING ASSOCIATION RULE
MINING AND GENETIC ALGORITHM |
|
Author: |
SRUTIPRAGYAN SWAIN, BIBHUTI BHUSAN DASH, SUNEETA MOHANTY, BANCHHANIDHI DASH,
PRASANT KUMAR PATTNAIK |
|
Abstract: |
Smart healthcare refers to the combination of emerging trends like Big Data
Analytics, IoT and AI & ML. The shift from traditional systems to smart health
applications results in the generation of massive volumes of data that need to
be managed. This data must be stored on cloud servers and shared with
organizations for future research and applications. However, sharing it with
third-party servers can result in the exposure of sensitive patient information
to external entities. A significant challenge in this context is ensuring the
protection of sensitive information to maintain privacy. Consequently, attribute
reduction plays a crucial role in managing large datasets by removing
unnecessary or redundant data, thereby facilitating the efficient hiding of
sensitive rules before public disclosure. This paper presents a
privacy-preserving framework designed to hide sensitive fuzzy association rules.
The proposed model incorporates two key stages: a pre-activity phase for mining
fuzzified association rules and a post-activity phase for concealing sensitive
rules. Experimental findings confirm the effectiveness of the proposed approach. |
|
Keywords: |
Rough Set (RS), Fuzzy Proximity Ratio (FPR), OIS (Ordered Information System),
Fuzzy Association Rule Mining. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
INTEGRATING BIG DATA ANALYTICS INTO HEALTHCARE DECISION SUPPORT SYSTEMS FOR
BETTER PATIENT OUTCOMES |
|
Author: |
ATLURI LAKSHMI, TADAVARTHI NAGA NAVYA, NELAKUDITI KRISHNAVENI, NELLI
SREEVIDYA, D. NAGA PURNIMA, T.P.S. KUMAR KUSUMANCHI |
|
Abstract: |
This article investigates the incorporation of BDA with healthcare DSSs to
improve the quality of clinical decision-making and patient outcomes. This study
aims to assess the potential of BDA in analyzing a broad range of healthcare
data types, from EHRs to real-time sensor data and clinical histories, to aid in
the prognostication of patient conditions, customized treatment, and care
optimization. The approach deploys machine learning classifiers (ANN, RF, and
SVM) on a synthetic dataset of over 500 thousand patient records. The findings
show that the performance of the ANN model is better than the conventional
machine learning models with an accuracy of 91.6%, precision of 0.89, recall of
0.87, and AUC-ROC of 0.92. Comparison with the current DSS models shows superior
predictive precision, decision efficiency, and real-time intervention capacity.
This incorporation of big data not only improves diagnostic and therapeutic
results but also allows physicians to receive real-time suggestions when making
decisions, thereby enabling early clinical interventions. Significance: This research
contributes to realizing the transformative capabilities of BDA in the
modernization of healthcare, the reduction of operation ineffectiveness, and
patient services with a substantial impact. |
|
Keywords: |
Big Data Analytics, Healthcare Decision Support Systems, Machine Learning,
Predictive Modeling, Patient Outcomes, Artificial Neural Networks |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
EARLY FIRE HAZARD PREDICTION FRAMEWORK IN SMART CITIES USING DEEP LEARNING WITH
ANTLION OPTIMIZATION ALGORITHM |
|
Author: |
DR. G. BHUVANESWARI, DR. G. MANIKANDAN, M. SANDHYA, DHANESH KUMAR, DR. ZIAUL
HAQUE CHOUDHURY, PRABHAKARA RAO T |
|
Abstract: |
This study aims to refine the early fire risk prediction model for evaluating
the accurate locations of fire using sensor data. Internet of Things (IoT) is
integral to smart cities. IoT applications in smart cities include crime
predictions, traffic optimization and monitoring of health and environmental
conditions. This article reports a study on using Recurrent Neural Network (RNN)
with Ant Lion optimization (ALO) framework to enhance and refine the prediction
of fire hazards. IoT sensors in smart cities monitor the environmental
conditions such as drought, temperature, smoke, flame, relative humidity, fuel
moisture and duff moisture. This sensed data is stored in the firebase cloud
storage and analyzed in the MATLAB tool. The ensuing enhancement of the proposed
model is validated by comparison with conventional prediction models. Our
results indicate gains in accuracy and reduction in error rates in fire hazard
predictions. |
|
Keywords: |
Environment, Fire Hazards, Internet Of Things, Smart City. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
TASK OFFLOADING IN EDGECLOUD RESOURCE SCHEDULING USING DEPENDANT COMPUTATION
TASK OFFLOAD USING ENHANCED FIREFLY (DCTO-EFF) |
|
Author: |
K. VINOTHKUMAR, DR. D. MARUTHANAYAGAM |
|
Abstract: |
In order to efficiently distribute computing jobs between edge devices and cloud
resources, task offloading is a crucial strategy in edge-cloud resource
scheduling. Numerous optimization strategies have been put forth to maximize
system performance and resource utilization. Among the cutting-edge optimization
algorithms that are carefully investigated in this study are the Dependant
Computation Task Offload Using Enhanced Bacterial Foraging Optimization
(DCTO-EBFO), Firefly Algorithm (FA), Elephant Herding Optimization Algorithm
(EHOA), Social Spider Optimization (SSO), Bee Colony Optimization (BCO), and
Hybrid Grey Wolf Lion Optimization (HGWLO). However, most existing optimization
strategies fail to effectively handle dependent computational tasks and often
suffer from slow convergence and reduced efficiency under dynamic edge-cloud
conditions. To address this, the proposed Dependant Computation Task Offload
using Enhanced Firefly (DCTO-EFF), a novel optimization method, is presented and
assessed. The paper first discusses the importance of optimization strategies in
enhancing offloading decisions, along with an overview of task offloading in
edge-cloud resource scheduling. Next, the core concepts and benefits of each
optimization algorithm are explained in detail. The proposed DCTO-EFF algorithm
for task offloading is then described, along with its features, capabilities,
and advantages. Comprehensive simulations using EdgeCloudSim are used to compare
the efficacy of the optimization strategies. Performance metrics include
makespan, response time, energy consumption, resource utilization, latency,
convergence speed, execution time, and delay. Overall, this study provides
useful details regarding the advantages and disadvantages of various
optimization algorithms for task offloading in edge-cloud resource scheduling,
assisting practitioners and researchers in selecting the optimal algorithm based
on specific deployment scenarios and application needs. |
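As a rough illustration of the firefly mechanism underlying DCTO-EFF, the sketch below runs a generic one-dimensional firefly step on a toy cost function; the cost, parameters, and elitist archive are our assumptions, not the authors' enhanced variant:

```python
import math
import random

random.seed(1)

def cost(x):
    """Toy 1-D makespan surrogate with its minimum at x = 3."""
    return (x - 3.0) ** 2

def firefly_step(pos, beta0=1.0, gamma=0.05, alpha=0.05):
    """One classic firefly iteration: each firefly moves toward every brighter
    (lower-cost) one, with attractiveness decaying as exp(-gamma * r^2),
    plus a small random exploration step."""
    new = list(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if cost(pos[j]) < cost(pos[i]):  # j is brighter than i
                r2 = (pos[i] - pos[j]) ** 2
                beta = beta0 * math.exp(-gamma * r2)
                new[i] += beta * (pos[j] - pos[i]) + alpha * (random.random() - 0.5)
    return new

swarm = [random.uniform(0.0, 6.0) for _ in range(8)]
best = min(swarm, key=cost)           # elitist archive of the best solution seen
for _ in range(100):
    swarm = firefly_step(swarm)
    best = min([best] + swarm, key=cost)
```

In an offloading context the scalar position would be replaced by an assignment of dependent tasks to edge or cloud nodes, and the cost by makespan or energy.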
|
Keywords: |
Task Offloading, Edge-Cloud Resource Scheduling, Optimization Algorithms,
Firefly Algorithm, Elephant Herding Optimization Algorithm, Dependant
Computation Task Offload using Enhanced Firefly. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
TWO LEVEL DENOISING WITH IMAGE BOUNDARY PIXEL BASED SEGMENTATION OF LUNG MRI FOR
ACCURATE LUNG TUMOR DETECTION |
|
Author: |
ANJANEYULU GURRAM, PARTHASARATHY RAMADASS |
|
Abstract: |
Cancer is characterized by abnormal clusters of cells and manifests itself in a
variety of ways. In India, lung cancer is the second leading killer as per the
records of Indian Cancer Society. Patients have a far better chance of survival
when they undergo early detection procedures using Magnetic Resonance Imaging
(MRI) scans. The ability to segment MRI scans is crucial for accurate diagnosis
and treatment in the clinic, making this an essential and important task.
Nevertheless, flaws like low contrast, noise, intensity inhomogeneity, etc.,
frequently taint MRI images. Clinical diagnostics and medical research rely
heavily on MRI; however, noise interference frequently compromises the imaging
process. This noise arises from many sources and lowers image quality, making
it harder for clinicians to reach correct diagnoses from what they see.
Conventional denoising techniques fail to account for MRI images' more
intricate forms of noise, such as Rician noise, because they presume that noise
is normally distributed. Consequently, denoising is still an important
and difficult task. Inhomogeneous intensities, juxta-pleural nodules, image
noises, and other similar issues make accurate lung segmentation from medical
images a continuing problem. An efficient image denoising approach should keep
critical edges intact. Several image denoising algorithms have attempted to
eliminate noise, but complete removal is rarely achieved; manual analysis,
meanwhile, is time-consuming, error-prone, and requires medical expertise.
This research aims to increase the accuracy of lung
needs medical knowledge. This research aims to increase the accuracy of lung
area segmentation in MRI images by proposing an updated two-level denoising and
segmentation framework for lung region segmentation preserving the accurate lung
border. To denoise MRI images of the lungs while keeping their outlines intact,
the proposed system initially employs a filtering strategy based on image
decomposition. The MRI images of the lungs are subsequently segmented using a
combination of enhanced morphological techniques. By utilizing a contour
correction strategy that is based on a quick corner detection technique, the
segmentations are further enhanced in order to rectify and smooth the retrieved
lung outlines. This research proposes a Two Level Denoising with Image Boundary
Pixel based Segmentation (TLD-IBPbS) model for accurate lung tumor detection.
The proposed model is compared with traditional models, and the results show
that it achieves superior performance. |
|
Keywords: |
Lung Cancer, Magnetic Resonance Imaging, Denoising Techniques, Lung
Segmentation, Two-Level Denoising, Enhanced Morphological Techniques, Image
Boundary Pixel, Lung Tumor Detection. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
A UNIFIED CNN-BASED FRAMEWORK FOR GENERALIZED AND REAL-TIME COLORECTAL CANCER
PREDICTION AND DIAGNOSIS: BRIDGING DATA GAPS, ENHANCING INTERPRETABILITY, AND
PERSONALIZING OUTCOMES |
|
Author: |
K YOGESWARA RAO, S ADI NARAYANA |
|
Abstract: |
Colorectal cancer (CRC) remains one of the most common and deadly cancers
worldwide, making early detection and accurate diagnosis more important than
ever. In this work, we introduce UniCRC-Net—a smart, CNN-based system designed
to predict and diagnose colorectal cancer in real time, using structured patient
data. Unlike many current machine learning and deep learning models, which
struggle with scattered data, lack of explainability, and generic predictions,
this unified approach brings together multiple patient details—like age, gender,
pathology scores, gene markers, diet, and environment—into a streamlined and
intelligent framework. The model is trained on a carefully constructed synthetic
dataset and optimized using the Adam algorithm over 50 training epochs. It
performs exceptionally well, hitting a perfect 100% accuracy, F1-score, and AUC,
which means it’s both highly precise and consistent in identifying cancer cases.
The results are backed by clear visualizations—such as accuracy and loss graphs,
a confusion matrix, and a sharp ROC curve—demonstrating how stable and
dependable the model is throughout its training. What sets UniCRC-Net apart is
its real-time capability, its ability to personalize predictions, and its
transparent design, which makes it easier to trust in clinical use. It's also
built with the future in mind—ready for integration with federated learning
systems that protect patient privacy while enabling collaboration across
hospitals and regions. In short, this framework not only fills major gaps in CRC
diagnostics but also moves us a step closer to AI-powered, patient-specific
cancer care that’s fast, secure, and clinically meaningful. |
|
Keywords: |
Colorectal Cancer, Deep Learning, Convolutional Neural Network, Real-Time
Diagnosis, Personalized Healthcare, ROC-AUC, Clinical Decision Support and
Interpretability in AI |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
EFFICIENT BIG DATA STORAGE SOLUTIONS FOR DISTRIBUTED CLOUD COMPUTING SYSTEMS |
|
Author: |
RAVURI DANIEL, BODE PRASAD, Y SREERAMAN, KANDRAKUNTA CHINNAIAH, DORABABU
SUDARSA, INDHUMATHI RAVICHANDRAN, MANIDHEER BABU GORIKAPUDI |
|
Abstract: |
Storing large volumes of data in distributed cloud computing systems is
challenging, as managing both structured and unstructured data is difficult
with existing tools like the Hadoop Distributed File System (HDFS) and object
storage systems. Existing solutions cannot manage throughput, scalability, and
affordability simultaneously, particularly for
data workloads with a wide range of use patterns. Although HDFS is designed for
ordered data and Amazon S3 handles a large amount of unstructured data, there
are no solutions in these approaches to quickly access or allocate resources to
different kinds of data. To address this gap, a new hybrid architecture is
proposed that combines HDFS with Amazon S3 and allocates data according to its
type and how often it is used. The aim of this research is to explore how it
integrates different systems to improve data retrieval time, ensure fault
tolerance, and make everything cost-efficient for both old and new data in the
cloud. By measuring the systems using the KDD Cup 1999 and UNSW-NB15 datasets,
it is found that the hybrid model is faster, more reliable, more scalable, and
more economical at data retrieval than HDFS or object storage used alone. The
results of this study describe a stable and practical solution for data storage
in the cloud and set a path for new studies and practical cloud data storage
improvements. |
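The type- and access-frequency-based placement policy described above can be sketched as a simple routing function; the thresholds and tier names are illustrative assumptions, not values from the paper:

```python
def route(record_type: str, accesses_per_day: float) -> str:
    """Toy placement policy for the hybrid architecture: structured data goes
    to HDFS, unstructured data to object storage (S3), and cold data of either
    kind is demoted to an archive tier. Thresholds are illustrative."""
    if accesses_per_day < 1:          # cold data, regardless of type
        return "s3-archive"
    return "hdfs" if record_type == "structured" else "s3-standard"

# Hot structured logs land in HDFS; hot media in S3; stale records in archive.
placements = [
    route("structured", 50),     # -> "hdfs"
    route("unstructured", 50),   # -> "s3-standard"
    route("structured", 0.2),    # -> "s3-archive"
]
```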
|
Keywords: |
Hybrid Storage, Big Data, Distributed File Systems, Object Storage, Data
Retrieval, Cloud Computing |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
NEURONET: AN ADVANCED DEEP LEARNING FRAMEWORK FOR EARLY DIAGNOSIS AND PREEMPTIVE
TREATMENT OF NEURODEGENERATIVE DISEASES |
|
Author: |
ASAD HUSSAIN SYED, AYESHA SIDDIQUA, K. MOHAMMADI JABEEN, SABIHA MAHVEEN |
|
Abstract: |
Globally, populations bear a heavy burden from the ravages of neurodegenerative
diseases such as Alzheimer’s and Parkinson’s. These diseases develop slowly and
are frequently diagnosed at a late stage, when therapy options are limited.
Current diagnostics rely on clinical examination and imaging, but early
pathological clues are often overlooked. AI-driven decline prediction models,
including Capsule Networks (CapsNets) and Sparse Learning Models, have been
developed and achieve improvements, but are not capable enough of fusing
multimodal data jointly and capturing spatial-temporal features
which are critical for early detection and prediction. To fill these gaps, we
propose a novel hybrid deep learning framework named NeuroNet, which is composed
of 3D Convolutional Neural Networks (CNN) for learning spatial features, Long
Short-Term Memory (LSTM) networks for modeling temporal sequences, attention
mechanism for selecting diagnostic regions and clinical biomarkers for
multimodal information fusion. This architecture is fine-tuned by Bayesian
hyperparameter optimization, and validated through ADNI dataset. The primary
contribution of this work is the evidence that multimodal fusion with
attention-informed deep learning, drives a dramatic improvement in diagnostic
accuracy and interpretability. NeuroNet achieved accuracy, precision, and
recall of 98.62%, 98.50%, and 98.70%, respectively, with an AUC-ROC of 0.998,
showing superior performance over several
state-of-the-art models. Explainable AI techniques like Grad-CAM and SHAP
increase model transparency even more. This work provides new insights in the
form of a clinically viable and interpretable deep learning model which pushes
the frontier of AI-facilitated neurodegenerative disease diagnosis. The model
can be easily incorporated into clinical diagnostic systems for early
intervention and better prognosis. |
|
Keywords: |
NeuroNet, Deep Learning, Neurodegenerative Diseases, Multimodal Integration,
Early Diagnosis |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
DEEP LEARNING-POWERED SKIN DISEASE CLASSIFICATION: OPTIMIZING TRANSFER LEARNING
FOR IMPROVED ACCURACY |
|
Author: |
SIPRA SAHOO, PRABHAT KUMAR SAHU, SMITA RATH, MITRABINDA KHUNTIA, MONALISA
PANDA, DEEPAK KUMAR PATEL, NIBEDITA JAGADEV, SHRABANEE SWAGATIKA |
|
Abstract: |
Skin diseases represent a prevalent global health challenge, with prevalence
rates in India ranging from 7.9% to 60%. While deep learning approaches show
promise for automated diagnosis, existing methods exhibit limitations in
robustness and generalizability due to: (1) inadequate feature selection
methodologies, (2) suboptimal data augmentation strategies, (3) unexplored
hybrid frameworks and (4) insufficient validation of attention mechanisms in
clinical settings. This study addresses these gaps through three key
innovations: comprehensive optimizer-architecture benchmarking, a novel
dual-dataset validation framework and an integrated preprocessing pipeline
combining seven enhancement techniques. We systematically evaluate four deep
learning architectures (custom CNN, VGG16, DenseNet-121, Inception-ResNet-v2)
with three optimization algorithms (Adam, SGD, RMSprop) on partitioned HAM10000
datasets containing 10,015 dermatoscopic images across seven disease categories.
Our approach reveals new knowledge: (1) VGG16 with Adam achieves
state-of-the-art 93.14% accuracy, the highest reported for single-model HAM10000
classification; (2) RMSprop unexpectedly outperforms Adam for DenseNet121
(83.86% vs 81.43%); and (3) dataset-specific optimizer behaviors critically
impact clinical applicability. These findings establish that systematic
evaluation of model-optimizer-dataset combinations significantly improves
diagnostic robustness. This research provides a foundation for affordable and
accessible
diagnostic tools with clinically actionable insights for deployment
optimization, which could benefit populations with limited healthcare access. |
|
Keywords: |
Deep Learning, Classification, Convolutional Neural Network, VGG16,
DenseNet-121, Inception-ResNet-v2, Transfer Learning, Fine-Tuning. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
KNOWLEDGE-INFUSED TRANSFORMER FOR ASPECT-BASED SENTIMENT ANALYSIS USING
CONCEPTNET AND SENTICNET WITH CROSS-DOMAIN GENERALIZATION |
|
Author: |
RAMESH BABU PITTALA, RAMAKRISHNA BOMMA, VEERASWAMY PITTALA, MEDIKONDA ASHA
KIRAN, GARLAPATI NARAYANA, SOLLETI PHANI KUMAR, MANYAM THAILE, LAKSHMI PRASANNA
BYRAPUNENI |
|
Abstract: |
This paper introduces the Knowledge-infused Transformer Aspect-based System
(KTAS), a novel approach to aspect-based sentiment analysis that integrates
ConceptNet and SenticNet knowledge with cross-domain generalization. The KTAS
framework is designed with algorithms that improve performance metrics by
slightly more than 29 percent over current practices. The feasibility of the
framework is evident,
as demonstrated by the accuracy of the results in an experiment using standard
datasets. Specifically, the suggested system involves the combination of
different computer-based methods such as group theory, dynamical systems, and
federated learning to provide a resistant design that would excel the existing
state-of-the-art approaches. KTAS is shown to attain better performance than
others in various evaluation aspects, as evidenced by detailed testing on Penn
Treebank and MS COCO data, on whose results the current work is
based. The approach addresses limitations of current solutions by taking
advantage of multimodal fusion and interpretability mechanisms, thereby better
managing complex data patterns. The experiment's results
indicate that the methodology applies to real-world scenarios with limited
resources and computational complexity, as it is highly accurate. There are also
ablation studies done to compare the input made by each component to the entire
performance, and it can be seen that the encoder-decoder module is especially
exceptional in ensuring the best results. Furthermore, a sensitivity analysis is
conducted, which determines the risks regarding the robustness of KTAS in
different situations, as well as showing that it is stable in various working
conditions. The theoretical analysis gives formal requirements for the
convergence behavior and computational effectiveness of the algorithm. Lastly,
the paper discusses possible application areas of the approach and provides
guidelines for future research to extend the capabilities of the
proposed system. There is substantial validation to support confidence in the
proposed system. |
|
Keywords: |
Cybersecurity, Human-Computer Interaction, Natural Language Processing, Data
Mining, Software Engineering |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
FORECASTING CURRENCY EXCHANGE RATES USING ARIMA, ETS AND RNN: A MACHINE LEARNING
PERSPECTIVE |
|
Author: |
T. SONI MADHULATHA, DR. MD. ATHEEQ SULTAN GHORI |
|
Abstract: |
Time series forecasting plays a key role in financial decision-making,
particularly in predicting exchange rate fluctuations. In this paper we
implemented three different methods for forecasting the INR/USD exchange rate:
The Autoregressive Integrated Moving Average (ARIMA) method, the Exponential
Smoothing (ETS) method, and a Long Short-Term Memory (LSTM)-based Recurrent
Neural Network (RNN) approach. The ARIMA(5,1,0) model captures past
dependencies using an autoregressive framework, while the ETS model adjusts for
trend-based variations using exponential smoothing techniques. The LSTM-based
RNN, designed to capture nonlinear patterns, is trained using MinMax-scaled
data with a sequence-based structure. When comparing all three models in terms
of predictive accuracy, the deep learning model captures short-term
fluctuations and nonlinear trends more effectively. Performance evaluation
using Mean Squared Error (MSE) of 36.424298, 17.956008, and 0.521610, Mean
Absolute Error (MAE) of 5.333123, 3.766352, and 0.49238, and Root Mean Squared
Error (RMSE) of 6.035255, 4.237453, and 0.64872 confirms the superior
forecasting capability of the deep
learning-based approach. These findings highlight the potential of RNNs in
financial time series forecasting, offering a robust alternative to traditional
statistical models. |
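The evaluation metrics quoted above are standard; for reference, a minimal implementation, run here on illustrative exchange-rate-style values rather than the paper's data:

```python
import math

def mse(actual, pred):
    """Mean Squared Error."""
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def mae(actual, pred):
    """Mean Absolute Error."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    """Root Mean Squared Error: always the square root of the MSE."""
    return math.sqrt(mse(actual, pred))

# Illustrative INR/USD-style values (not from the paper).
actual = [82.1, 82.4, 82.0, 81.8, 82.3]
pred   = [82.0, 82.5, 82.2, 81.9, 82.1]
scores = (mse(actual, pred), mae(actual, pred), rmse(actual, pred))
```

Because RMSE is by definition the square root of MSE, the two always move together; MAE penalizes large errors less severely.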
|
Keywords: |
Currency Forecasting, RNN, ARIMA, ETS, Machine Learning |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
HOW CAN GIS INTEGRATION IMPROVE LAND MANAGEMENT FOR SUSTAINABLE DEVELOPMENT IN
TERRITORIAL COMMUNITIES |
|
Author: |
SERHII SHEVCHUK, NATALIIA PROKOPENKO, ANDRIY VYNNYK, OLEKSII MUKHIN, DENYS
SITKO |
|
Abstract: |
Despite the increasing availability of open geospatial data and the widespread
use of geographic information systems (GIS), there remains a significant gap in
the literature regarding the systematic integration of GIS into community-level
land management for sustainable development. Existing studies often focus on
isolated applications or sectoral use cases, lacking a holistic framework for
operational implementation in decentralised settings. This study addresses this
gap by developing and validating a comprehensive concept of GIS integration into
land management processes, tailored to the needs of territorial communities. The
proposed approach covers the full analytical cycle: from the collection,
cleaning, and normalisation of satellite and cadastral data to environmental
sustainability assessment, spatial visualisation, and decision-making. Drawing
on multi-source data (Sentinel-2, Landsat 8, public cadastral registers,
crowdsourcing platforms), and using QGIS, ArcGIS, GeoDA, and Google Earth
Engine, the study conducts a comparative analysis of software capabilities in
spatial modelling, including the construction of NDVI, LST, and ESI indices,
interpolation, regression, and scenario-based cluster analysis. A novel
Environmental Sustainability Index (ESI) is introduced, integrating factors such
as urbanisation, vegetation cover, erosion susceptibility, and anthropogenic
pressure, and visualised as multilayer thematic maps. A metropolitan case study
demonstrates the operational value of GIS in land use planning and provides an
adaptable algorithm for community-level implementation. The results contribute
new knowledge by establishing a replicable GIS integration model that enhances
spatial accuracy, planning transparency, and environmental resilience. Key
challenges identified include standardising data formats, expanding digital
skills, and aligning with national legislation. This study offers both
methodological innovation and practical tools for advancing sustainable land
governance in resource-constrained environments. |
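Among the indices mentioned, NDVI has a simple closed form, (NIR - Red) / (NIR + Red). A minimal sketch follows, assuming Sentinel-2-style band arrays (B8 for near-infrared, B4 for red); the toy reflectance values are our own:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from reflectance bands
    (Sentinel-2: B8 = NIR, B4 = red). Values near +1 indicate dense
    vegetation; values at or below 0 indicate bare soil or water."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

# Toy 2x2 reflectance patches: top row vegetated, bottom row bare/water.
nir = [[0.6, 0.5], [0.3, 0.1]]
red = [[0.1, 0.2], [0.3, 0.4]]
v = ndvi(nir, red)
```

In the workflow described above, rasters like `v` would feed the ESI computation and the multilayer thematic maps.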
|
Keywords: |
GIS; Sustainable Development; Territorial Communities; Environmental
Sustainability; Natural Resources |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103. No. 15-- 2025 |
|
Full
Text |
|
|
Title: |
MODEL AGNOSTIC META LEARNING – LONG SHORT-TERM MEMORY FOR LEARNING STYLE
CLASSIFICATION |
|
Author: |
AMRUTH K JOHN, BINU THOMAS |
|
Abstract: |
An adaptive learning system aims to enhance the effectiveness of the educational
process by tailoring it to individual students. A key aspect of this adaptation
involves identifying the most suitable learning approach, based on Visual,
Auditory, and Kinesthetic (VAK) learning styles. However, accurately classifying
learning styles remains a challenge due to the presence of concept drift, which
affects the network’s ability to generalize across different learners. To
address this, the Model Agnostic Meta Learning – Long Short-Term Memory
(MAML-LSTM) model is proposed in this research study for effective learning
style classification. MAML is incorporated into the LSTM network to identify
shifts in classification patterns and adapt to new learners quickly. Rather than
retraining the network from the beginning, the model dynamically fine-tunes the
LSTM in response to concept drift, thereby improving its generalization
capability. The MAML-LSTM integration enables rapid adaptation to concept drift
by fine-tuning on limited new data, eliminating the need for complete
retraining. This enhances the model’s ability to maintain high classification
accuracy across dynamic learner behaviors. Additionally, Local Interpretable
Model-agnostic Explanations (LIME) are employed after classification to
highlight key features, ensuring greater transparency and interpretability. The
proposed MAML-LSTM achieves 97.77% accuracy, 97.72% precision, 97.72% recall,
97.72% F1-score, 97.72% specificity, and 99.81% AUC on the VAK learning style
dataset, outperforming existing algorithms. |
|
Keywords: |
Auditory, Kinesthetic, Learning Style, Long Short-Term Memory, Model Agnostic
Meta Learning, Visual. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103 No. 15 -- 2025 |
|
Full
Text |
|
|
Title: |
EARLY SEPSIS DETECTION FROM ICU DATA USING WAVELET FEATURE EXTRACTION WITH
HYBRID DEEP LEARNING FRAMEWORK |
|
Author: |
Dr. V. GOKULA KRISHNAN, Dr. PINAGADI VENKATESWARA RAO, Dr. K. SREERAMAMURTHY,
Dr. R. DHARMAPRAKASH, Dr. B. PRATHUSHA LAXMI |
|
Abstract: |
For patients in intensive care units (ICUs), prompt and precise identification
of sepsis, a potentially fatal illness, is essential: timely care and better
patient outcomes depend on early detection. Traditional diagnostic procedures
often miss the early signs of sepsis, delaying treatment. Using the
open-source MIMIC-III clinical database, we provide a new hybrid diagnostic
system for early sepsis identification that makes use of deep learning, signal
processing, and metaheuristic optimisation. We start by using physiological
time-series data including temperature, respiration rate, blood pressure, and
heart rate to extract robust, multi-scale, and noise-resistant features using
the Wavelet Scattering Transform (WST). Afterwards, a one-dimensional
input-adapted AlexNet-based Convolutional Neural Network is trained with these
features to learn the high-level feature representations necessary to
differentiate between septic and non-septic patients. To further enhance model
accuracy and generalisation, we optimise hyperparameters including the learning
rate, filter sizes, and dropout rates using the Hunger Games Search (HGS)
method, a recent nature-inspired metaheuristic. The proposed WST-AlexNet-HGS
pipeline outperforms benchmark CNN architectures and traditional machine
learning models, with classification performance above 95% on evaluation
measures such as accuracy, precision, recall, and F1-score, demonstrating
strong diagnostic reliability. This robustness and reliability position the
model as a viable decision support system for real-time clinical use in
intensive care units. |
|
Keywords: |
Intensive care units, Hunger Games Search, Wavelet Scattering Transform, Sepsis
detection, Convolutional Neural Network, AlexNet. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103 No. 15 -- 2025 |
|
Full
Text |
|
|
Title: |
SWARM OPTIMIZED MACHINE LEARNING MODEL FOR ENHANCED PREDICTION OF CORONARY
ARTERY DISEASE |
|
Author: |
JAYAMOL P. JAMES, Dr. GNANAPRIYA S. |
|
Abstract: |
Coronary artery disease poses a critical health challenge requiring accurate and
timely diagnosis. This research introduces an optimized classification framework
that integrates Artificial Bee Colony (ABC) algorithm with Naive Bayes (NB) to
strengthen prediction performance. The working mechanism begins with feature
extraction from a benchmark coronary dataset, where ABC acts as a bio-inspired
swarm intelligence method to identify the most relevant clinical attributes by
mimicking the intelligent foraging behavior of bees. By eliminating redundant
and noisy features, ABC enhances the learning environment for the Naive Bayes
classifier. NB then uses these refined features to compute posterior
probabilities for classification, yielding efficient
decision boundaries. Experimental validation shows that the ABC-NB model
achieves 93.45% accuracy, 94.23% sensitivity, and 92.56% specificity,
outperforming standard classifiers in early-stage CAD detection. The model’s
simplicity, interpretability, and high predictive value make it suitable for
integration into clinical workflows. Its low computational complexity enables
deployment on embedded medical systems. This framework promotes scalable,
cost-effective, and accurate cardiac risk prediction. Future advancements will
involve real-time prediction support, dynamic retraining with streaming patient
data, and expansion to multi-modal datasets for comprehensive cardiovascular
profiling across diverse populations and clinical scenarios. |
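The Naive Bayes half of the pipeline can be sketched directly; the following Gaussian NB computes posterior probabilities from two hypothetical clinical attributes, assuming the ABC stage has already selected them (the bee-colony search itself is not reproduced, and the toy records are invented):

```python
import math

# Gaussian Naive Bayes sketch: posterior probabilities from a few
# (hypothetical) clinical attributes. The ABC feature-selection stage is
# assumed to have already picked these features.

def gaussian_pdf(x, mean, var):
    return math.exp(-((x - mean) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit(rows, labels):
    model = {}
    for c in set(labels):
        class_rows = [r for r, l in zip(rows, labels) if l == c]
        stats = []
        for col in zip(*class_rows):  # per-feature mean and variance
            mean = sum(col) / len(col)
            var = sum((v - mean) ** 2 for v in col) / len(col) + 1e-6
            stats.append((mean, var))
        model[c] = (labels.count(c) / len(labels), stats)
    return model

def posterior(model, x):
    # Naive independence assumption: multiply per-feature likelihoods
    scores = {c: prior * math.prod(gaussian_pdf(v, m, s)
                                   for v, (m, s) in zip(x, stats))
              for c, (prior, stats) in model.items()}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

# Toy data: [age, cholesterol]; 1 = CAD, 0 = healthy (illustrative only)
X = [[63, 233], [45, 180], [67, 286], [40, 190], [70, 260], [38, 170]]
y = [1, 0, 1, 0, 1, 0]
model = fit(X, y)
print(posterior(model, [65, 250]))
```

The class with the larger posterior is the prediction; the `1e-6` variance floor guards against zero-variance features.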
|
Keywords: |
Coronary Artery Disease, Artificial Bee Colony, Naive Bayes, Feature Selection,
Medical Diagnosis, Swarm Intelligence |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103 No. 15 -- 2025 |
|
Full
Text |
|
|
Title: |
DIGITAL TRANSFORMATION OF PUBLIC SERVICES SYSTEMATIC REVIEW OF KEY SUCCESS
FACTORS |
|
Author: |
MOUTAMIR Hajar , El QOUR Tahar |
|
Abstract: |
The digital transformation of public services, particularly through
e-government, is crucial to increasing the efficiency, transparency and
accessibility of government services, especially in developing countries.
Although significant efforts have been made, public sector organizations
continue to face persistent challenges in the implementation of digital
services, often leading to suboptimal outcomes. While numerous studies have
explored critical success factors (CSFs) across different contexts, a systematic
consolidation of these findings remains limited, hindering both theoretical
integration and practical application. This systematic review addresses that gap
by applying the PRISMA approach to data collected between 2004 and 2024 from the
Scopus and Web of Science databases, aggregating findings from multiple case
studies conducted across various countries. The analysis identifies and details
the determining factors in the successful digital transformation of public
services, in order to support the implementation of new initiatives.
The results highlight factors such as strong leadership and governance,
availability of technological infrastructure, security and confidentiality of
systems, as well as quality of systems, information, services and ease of use.
In conclusion, this study proposes a structured framework for implementing
sustainable, scalable, and more inclusive digital public services. |
|
Keywords: |
Key Success Factors, Determinants, Digital Transformation, Public Services,
E-Government. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103 No. 15 -- 2025 |
|
Full
Text |
|
|
Title: |
UTILIZING BIG DATA FOR SOCIAL MEDIA TREND FORECASTING AND INFLUENCE ANALYSIS |
|
Author: |
SHOBANA GORINTLA, Dr. G SUMALATHA, S ARUNA, NAGASIVA JYOTHI KOMPALLI, PALAMAKULA
RAMESH BABU, SUNEETHA BANDEELA |
|
Abstract: |
Social media has become a vital tool for shaping public opinion, political
debate, and consumer choice, making trend prediction and influencer analysis
more critical than ever for businesses, policymakers, and researchers. This
paper introduces a new methodology of combining sentiment analysis, machine
learning and network analysis to predict social media trends and identify
cultural influencers. We acquire information from Twitter, Instagram, and
Facebook using sophisticated NLP and machine learning models such as LSTM for
sentiment classification and ARIMA for trend prediction. A combined measure of
sentiment, user engagement, and network centrality is introduced to identify
influential users. The results show the superiority of our hybrid approach over
traditional baselines, with a trend-forecasting MAE of 0.145 and an
influencer-recommendation F1-score of 86.2%. The results
demonstrate the capability of integrating sentiment analysis and
influencer detection to enhance trend prediction and influence analysis. The
methodology and its implementation provide useful tools for real-time social
media analysis, with applications in marketing, politics, and social science
that give stakeholders insight into social media dynamics. |
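As a rough sketch of the trend-forecasting step, a least-squares AR(1) fit can stand in for the paper's ARIMA model, with MAE as the reported error metric; the engagement series and all numbers here are illustrative, not the paper's data:

```python
# AR(1) trend forecast sketch with MAE evaluation. A stand-in for a
# full ARIMA model; the engagement series below is invented.

def fit_ar1(series):
    # Least-squares regression of x[t] on x[t-1]: x[t] = c + phi * x[t-1]
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    phi = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
           / sum((x - mx) ** 2 for x in xs))
    return phi, my - phi * mx

def forecast(series, steps, phi, c):
    # Roll the fitted recurrence forward from the last observation
    preds, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        preds.append(last)
    return preds

def mae(pred, actual):
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(pred)

engagement = [10, 12, 15, 19, 24, 30, 37, 45]  # toy daily mention counts
phi, c = fit_ar1(engagement[:6])               # train on the first 6 days
pred = forecast(engagement[:6], 2, phi, c)     # forecast the held-out 2
print([round(p, 2) for p in pred], round(mae(pred, engagement[6:]), 3))
```

A fitted `phi > 1` reflects the growing trend in the toy series; a full ARIMA model adds differencing and moving-average terms on top of this autoregressive core.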
|
Keywords: |
Social Media, Trend Forecasting, Sentiment Analysis, Influencer
Identification, Machine Learning, Network Analysis |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103 No. 15 -- 2025 |
|
Full
Text |
|
|
Title: |
A LIGHTWEIGHT U-NET WITH SEPARABLE CONVOLUTIONS FOR EFFICIENT LUNG SEGMENTATION
IN REAL-TIME MEDICAL IMAGING |
|
Author: |
SALEH ALGHAMDI, MOHAMMAD SHORFUZZAMAN |
|
Abstract: |
Accurate and efficient segmentation of the lung regions is indispensable for
detecting and managing pulmonary diseases, as it allows clinicians to identify
abnormalities and plan effective intervention strategies. However, the high
computational demands of many existing segmentation models pose a significant
challenge, particularly for deployment in resource-constrained environments
such as mobile and edge platforms or point-of-care devices. Lung segmentation is
further challenged by wide anatomical variability and imaging artifacts, which
existing models often struggle to handle without access to large-scale hardware.
This study addresses this limitation by introducing a lightweight U-Net
architecture that integrates depthwise separable convolutions to reduce
computational complexity while preserving segmentation accuracy. By replacing
standard convolutional layers, the model achieves faster inference and
significantly lower parameter counts, making it well-suited for IT applications
in embedded systems and clinical informatics. The model was evaluated on the
publicly available Pulmonary Chest X-Ray Defect Detection dataset from Kaggle,
demonstrating its effectiveness in segmenting lung regions. The performance
evaluation shows that our model delivers outstanding results, attaining a Dice
score of 91.92%, a Jaccard index of 82.75%, precision of 92.64%, recall of
90.31%, and accuracy of 97.12% on the test dataset. These results highlight that
the lightweight U-Net achieves state-of-the-art segmentation accuracy with
significantly reduced computational overhead, making it an ideal IT solution for
real-time use in clinical workflows and deployment on limited-resource devices. |
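The parameter saving behind depthwise separable convolutions is easy to verify with a little arithmetic; the layer size below is an illustrative choice, not one taken from the paper:

```python
# Parameter counts for one 3x3 convolution layer, biases omitted.

def standard_conv_params(k, c_in, c_out):
    # Every output channel mixes all input channels through a k x k kernel
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise k x k filter per input channel, then a 1x1 pointwise mix
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 64, 128
std = standard_conv_params(k, c_in, c_out)
dws = separable_conv_params(k, c_in, c_out)
print(std, dws, round(std / dws, 1))  # → 73728 8768 8.4
```

Repeating this swap throughout a U-Net is what yields the significantly lower parameter counts and faster inference the abstract reports.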
|
Keywords: |
Lung Segmentation, Depthwise Separable Convolutions, Lightweight U-Net, Chest
X-ray Analysis, Real-Time Inference |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103 No. 15 -- 2025 |
|
Full
Text |
|
|
Title: |
HYBRID MODEL FOR ASSESSING THE USABILITY OF A UNIVERSITY E-LEARNING PLATFORM: A
CASE STUDY OF THE I-UH2C SYSTEM |
|
Author: |
MINA NKAILI, MAJIDA LAAZIRI, MOHAMED AZOUAZI |
|
Abstract: |
Promoting distance learning is essential to support students fresh out of high
school in their transition to full autonomy, but this transition must be
managed carefully in practice. Gradually integrating distance learning into their
curriculum is an effective approach to developing their self-learning skills.
This method enables them to learn how to manage their time and assess
themselves, thereby strengthening their autonomy and their ability to adapt to a
variety of learning situations. This research aims to assess the usability,
success, and quality of learning platforms by developing a new evaluation
approach based on two models, using a set of criteria derived from a
quantitative and qualitative research methodology applied at Hassan II
University in Casablanca to the i-UH2C e-learning platform. The results
underline the importance of implementing an evaluation model for e-learning
platforms to help the university achieve its objectives and improve the
performance of such distance learning platforms to meet the needs of learners.
These results also encourage a better understanding
of the positive aspects of this approach, such as active engagement in distance
learning, ease of use of the platforms, interactivity and satisfaction endorsed
by the majority of students, as well as areas that could be improved, including
infrastructure, socio-economic disparities, and the learner’s socio-cultural
environment. |
|
Keywords: |
Distance learning, platform, learning, evaluation, usability, quality, i-UH2C. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2025 -- Vol. 103 No. 15 -- 2025 |
|
Full
Text |
|
|
|