Submit Paper / Call for Papers
The journal receives papers in a continuous flow and considers articles from a
wide range of Information Technology disciplines, encompassing everything from
basic research to the most innovative technologies. Please submit your papers
electronically through our submission system at http://jatit.org/submit_paper.php
in MS Word, PDF, or a compatible format so that they may be evaluated for
publication in the upcoming issue. This journal uses a blinded review process;
please remember to include all your personally identifiable information in the
manuscript before submitting it for review, and we will redact the necessary
information on our side. Submissions to JATIT should be full research / review
papers (properly indicated below the main title).
|
|
|
Journal of Theoretical and Applied Information Technology
March 2026 | Vol. 104 No. 5 |
|
Title: |
DOLLMAKER GIANT ARMADILLO OPTIMIZATION ENABLED VISION TRANSFORMER CONVOLUTIONAL
FORWARD HARMONIC NET FOR STRESS DETECTION USING PPG SIGNAL |
|
Author: |
C.S.L VIJAYA DURGA, J.MANIMARAN, M PURUSHOTHAM REDDY |
|
Abstract: |
Stress detection is crucial because early identification allows timely
intervention that prevents the development of serious health problems related to
prolonged stress. However, existing methods suffer from limited feature
representation, ineffective feature fusion, poor robustness to noise, class
imbalance, and suboptimal optimization during model training. Many conventional
machine learning (ML) and deep learning (DL) approaches fail to capture the
complex, nonlinear physiological patterns associated with stress, reducing
generalization and reliability. To address these challenges, this work proposes
a Dollmaker Giant Armadillo Optimization-enabled Vision Transformer
Convolutional Forward Harmonic Net (DGAO_ViTCFHNet) model for detecting stress
from Photoplethysmogram (PPG) signals. Initially, the input PPG signal is passed
through a feature extraction process whose outcome is taken as output-1. In
parallel, the same PPG signal undergoes a second feature extraction process
covering time-domain, statistical, and frequency-based features, whose outcome
is taken as output-2. The extracted features are then fused using the
squared-chord distance with a Quantum Dilated Convolutional Neural Network
(QDCNN). The fused features are subjected to data augmentation, accomplished by
the oversampling technique. Finally, detection is performed using ViTCFHNet,
which integrates a Vision Transformer (ViT) and Convolutional Neural Networks
(CNN) with a forward harmonic analysis concept. ViTCFHNet is trained using
Dollmaker Giant Armadillo Optimization (DGAO), derived by integrating the
Dollmaker Optimization Algorithm (DOA) and Giant Armadillo Optimization (GAO).
The effectiveness of DGAO_ViTCFHNet is analyzed using metrics such as accuracy,
Positive Predictive Value (PPV), Negative Predictive Value (NPV), True Positive
Rate (TPR), and True Negative Rate (TNR), which attained superior values of
96.05%, 87.88%, 95.74%, 88.05%, and 87.80%, respectively. |
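The abstract above says class imbalance is handled "by the oversampling
technique" without naming one. As a minimal, illustrative sketch (not the
authors' method), the snippet below balances a toy fused-feature set by random
duplication of minority-class samples; all data and names here are hypothetical.

```python
import random

def random_oversample(features, labels, seed=42):
    """Balance a dataset by duplicating random samples of minority classes
    until every class matches the size of the largest class."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(features, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        picked = list(xs)
        while len(picked) < target:
            picked.append(rng.choice(xs))
        out_x.extend(picked)
        out_y.extend([y] * len(picked))
    return out_x, out_y

# Toy fused-feature vectors: 4 "stress" samples vs. 1 "no-stress" sample.
X = [[0.1], [0.2], [0.3], [0.4], [0.9]]
y = ["stress", "stress", "stress", "stress", "no-stress"]
Xb, yb = random_oversample(X, y)
print(yb.count("stress"), yb.count("no-stress"))  # 4 4
```

More elaborate schemes (e.g., SMOTE) interpolate new samples instead of
duplicating, but the balancing goal is the same.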
|
Keywords: |
Quantum Dilated Convolutional Neural Networks, Vision Transformer, Convolutional
Neural Networks, Dollmaker Optimization Algorithm, Giant Armadillo Optimization. |
|
DOI: |
https://doi.org/10.5281/zenodo.19127908 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
THREATSCAN - CYBERTHREAT PREDICTION SYSTEM USING TEMPORAL CONVOLUTIONAL NETWORKS |
|
Author: |
THONDEPU ADILAKSHMI, MALLANNAGARI SUNITHA, MADDARAPU PAVAN DURGA NIVAS,
HARSHITHA PALLAPOLU |
|
Abstract: |
The growing digitization of the public and private sectors increases the attack
surface for cyber threats, including denial-of-service (DoS) attacks and more
advanced zero-day exploits. Unlike established Intrusion Detection Systems
(IDSs) that rely on rule-based signatures and conventional machine learning, the
proposed AI-based system, which utilizes Temporal Convolutional Networks (TCNs),
anticipates threats by scrutinizing multivariate time-series network traffic and
system log data of an organization. The proposed model leverages the
Cybersecurity Threat Detection and Awareness Program dataset, which includes
real-world network event data associated with flow logs, IDS alerts, firewall
telemetry, and threat intelligence feeds from India. The model evaluates threats
and anticipates cyber-attack vectors within the scope of multi-class
classification and anomaly detection. TCNs were chosen for their ability to
model long-range temporal features, which is essential in recognizing and
predicting attacks that evolve over time. The model predicts and classifies
attack types, and signals detected anomalies using five levels (No Threat, Low,
Medium, High, and Critical). The TCN provides a convergence of security
analytics and threat intelligence that is instrumental in achieving more
effective incident response; real-time anomaly detection and risk scoring help
security teams prioritize responses and mitigate risks before threats escalate. |
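The abstract above maps model outputs to five alert levels (No Threat, Low,
Medium, High, Critical). As a hedged illustration of how an anomaly score could
be bucketed into those levels, the sketch below uses invented thresholds; the
paper does not specify its actual cut-offs.

```python
def threat_level(score):
    """Map an anomaly score in [0, 1] to the five alert levels named in the
    abstract. The threshold values here are illustrative, not the authors'."""
    levels = [(0.2, "No Threat"), (0.4, "Low"), (0.6, "Medium"),
              (0.8, "High"), (1.0, "Critical")]
    for upper, name in levels:
        if score <= upper:
            return name
    return "Critical"  # scores above 1.0 are clamped to the top level

for s in (0.05, 0.35, 0.55, 0.75, 0.95):
    print(s, "->", threat_level(s))
```

In a deployed system these thresholds would typically be calibrated on a
validation set so that each level corresponds to a target false-alarm rate.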
|
Keywords: |
Cyberthreat, Intrusion Detection System, Dual Stacked Temporal Convolutional
Network, Temporal Convolution Network, Machine Learning. |
|
DOI: |
https://doi.org/10.5281/zenodo.19128006 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
DESIGNING FOR USABILITY: AN AI-HUMAN HYBRID MENTAL HEALTH COUNSELING WEB APP FOR
STUDENTS |
|
Author: |
ELGA THERESIA, TANTY OKTAVIA |
|
Abstract: |
The significant rise in demand for mental health support among students has
strained the capacity of counselors and psychologists, necessitating innovative
approaches that extend access for students while reducing professional workload.
AI is increasingly explored as a way to address this issue. This research
investigates and formulates the user experience of an
AI-Human hybrid mental health counseling web app for students. A mixed-methods
design was adopted. It consisted of in-depth interviews with students and mental
health professionals to elicit user-centered insights, and an online survey
using the Kano model and conjoint analysis to prioritize system features.
Findings consistently highlight support for a hybrid human–AI model. Both
students in higher education and mental health professionals recognized AI’s
potential as a complementary tool, particularly in reducing administrative
tasks. Participants favored the use of AI for onboarding clients and collecting
initial information, thereby enabling counselors to dedicate more time to
meaningful dialogues. At the end of sessions, AI was also viewed as valuable in
generating summaries and supporting post-session analysis. However, attitudes
towards AI use during live counseling were more cautious. This research
contributes conceptual and empirical foundations for the development of an
AI-enhanced solution that preserves human connection while improving efficiency
and quality of care. |
|
Keywords: |
Artificial Intelligence, Mental Health, User-Centered Design, Counseling,
Human-Computer Interaction |
|
DOI: |
https://doi.org/10.5281/zenodo.19128056 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
REVOLUTIONIZING WASTE CLASSIFICATION THROUGH MULTIMODAL CONDITIONAL GANS WITH
ROBUST REGULARIZATION |
|
Author: |
NANDIKANTE SHRAVANI, Dr. S. VIGNESHWARI |
|
Abstract: |
This study proposed a novel framework for automated waste classification by
employing an enhanced Conditional Generative Adversarial Network (cGAN)
integrated with multimodal data fusion using thermal and hyperspectral imagery.
To address the persistent challenge of adversarial training instability,
spectral normalization and gradient penalty regularization were incorporated,
which ensured stable convergence and robust feature learning. The proposed
framework not only generated high-quality and diverse synthetic waste samples
but also resulted in measurable improvements in downstream classification
performance. Experimental evaluations conducted on public benchmark datasets
demonstrated an approximate 15% improvement in classification accuracy and a 20%
enhancement in robustness compared to conventional approaches. The results
highlighted the effectiveness of combining multimodal sensing with advanced
generative modeling for improving discrimination capability under data-scarce
and variable conditions. Overall, the findings established the proposed approach
as a scalable and reliable solution for accurate waste categorization,
underscoring the potential of multimodal machine learning frameworks in
supporting environmentally sustainable waste management systems. |
|
Keywords: |
Conditional Generative Adversarial Networks; Waste Classification; Multimodal
Data; Regularization Techniques; Spectral Normalization; Gradient Penalties;
Environmental Sustainability; Machine Learning; Hyperspectral Imaging; Thermal
Imaging |
|
DOI: |
https://doi.org/10.5281/zenodo.19128125 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
INTEGRATING ADVERSARIAL AUTOENCODERS WITH GATED RECURRENT UNITS TO IDENTIFY
ANOMALIES IN ECG SIGNALS |
|
Author: |
SHAIK JANBHASHA, VENKATESWARLU SUNKARI, DIPANWITA DEBNATH, VENKAT RAO
PASUPULETI, AKETI NARESH, LAKSHMANARAO TALAPAKULA, JUTU GOPAIAH, ROHTIH BALA
JASWANTH B |
|
Abstract: |
Cardiovascular diseases remain a leading global health concern, necessitating
advanced methods for early detection of cardiac anomalies. This study proposes a
hybrid model combining Adversarial Autoencoders (AAE) and Gated Recurrent Units
(GRU) to improve anomaly detection in Electrocardiogram (ECG) signals. The AAE
module extracts robust latent representations of normal ECG patterns, while the
GRU captures temporal dependencies within the signal. Experimental results
demonstrate superior performance compared to existing methods, achieving 98.6%
accuracy, 97.7% recall, 98.2% F1-score, 98.8% precision, and 99.1% AUC-ROC. The
proposed framework reduces false positives and enhances diagnostic reliability,
offering a promising tool for automated cardiac monitoring. |
|
Keywords: |
ECG, Anomaly Detection, Adversarial Autoencoder, GRU, Deep Learning |
|
DOI: |
https://doi.org/10.5281/zenodo.19128170 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
COGNITIVE ALLOCATOR: AN END-TO-END SECURE AND DYNAMIC VIRTUAL MACHINE ALLOCATION
FRAMEWORK FOR OPTIMIZED RESOURCE MANAGEMENT IN CLOUD DATA CENTERS |
|
Author: |
K. SARAVANAN, DR. R. SANTHOSH |
|
Abstract: |
Cloud Computing (CC) offers ubiquitous services to the Information Technology
(IT) environment with abundant resources. However, due to the massive usage of
cloud resources, service availability often suffers from outages, power
consumption, and security issues. Several state-of-the-art works tend to trade
off resource usage optimization against security in cloud Virtual Machines
(VMs). To overcome this issue, we design a secure and effective cloud VM
allocator named “Cognitive Allocator”. The designed Cognitive Allocator model
balances end-to-end security and resource management in the cloud computing
environment by utilizing Deep Learning (DL), Deep Reinforcement Learning (DRL),
and optimization algorithms, respectively. The entities involved in the model
include Cloud Users (CUs), Cloud Service Providers (CSPs), an Authentication
Server (AS), Edge Servers (ESs), and a Cloud Broker. Firstly, the CUs and CSPs
are authenticated by the AS to guarantee authenticity and guard against network
traffic attacks. The authenticated CUs are then restricted by the ESs with a
hybrid access control (role- and policy-based) mechanism using a Dual Agent DRL
(DA-DRL) algorithm. The DA-DRL algorithm learns from past experience and firmly
controls access for the CUs based on their roles, Service Level Agreements
(SLAs), and authenticity. Finally, we perform secure and optimized VM allocation
in the cloud broker server using a DL and an optimization algorithm, namely the
Attention Classification Network (AC-Net) and Horse Herd Optimization (H2O),
respectively. The AC-Net classifies user tasks into three classes: public,
confidential, and highly sensitive. Based on the classified user task and
security parameters, the H2O algorithm effectively allocates VMs for the CSPs.
The proposed work is implemented using CloudSim version 3.03 with various
validation metrics, such as security and privacy analysis, execution cost,
resource utilization, power consumption, and allocation time. The validation
results show that the proposed Cognitive Allocator model outpaces the
state-of-the-art models. |
|
Keywords: |
Cloud Computing (CC), Virtual Machine (VM), Cloud Service Providers (CSPs), Edge
Server (ES), Deep Learning (DL), and Deep Reinforcement Learning (DRL) |
|
DOI: |
https://doi.org/10.5281/zenodo.19128224 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
EFFECTIVE CBIR BASED ON HYBRID IMAGE FEATURES AND MULTILEVEL APPROACH |
|
Author: |
V. ARCHANA REDDY, V. VIJAYA KUMAR |
|
Abstract: |
The instantaneous search and retrieval of the images most relevant to a specific
query is one of the significant applications of image processing. The process of
retrieving images using image contents is widely known as content-based image
retrieval (CBIR). Image features extracted from local 3x3 windows have yielded
good results in CBIR. Moreover, micro windows of 2x2 have been used to derive
texton and motif features, which play a dominant role in CBIR. This paper
segments the 3x3 window into two windows, namely cross and diagonal windows.
Four directional motifs of 1x3 are extracted from each of the cross and diagonal
segments. The motif features are derived using the Rule-based Directional Motif
(RDM) to address ambiguity issues. This paper transforms the motif indexes
derived on a 1x3 triangular window to a 3x3 window and derives eight triangular
RDM local units. The local features are extracted by integrating the features
extracted from the eight different images. The extracted features capture
directional, texture, pattern, and edge properties derived from the 3x3 and
triangular windows. The results indicate the superiority of the proposed
methods. |
|
Keywords: |
3x3 Window, 1x3 Triangular Window, Directional Motif |
|
DOI: |
https://doi.org/10.5281/zenodo.19128259 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
HNBK-MR-BILSTM HYBRID MACHINE LEARNING MODEL FOR ENHANCED DIABETIC RETINOPATHY
DETECTION IN FUNDUS IMAGING |
|
Author: |
ALKA SINGH, RAKESH KUMAR, AMIR H GANDOMI |
|
Abstract: |
Computer-Aided Diagnosis (CAD) of retinal fundus images is becoming a preferred
alternative to manual fundus inspection. CAD of retinal fundus images has been
found to be more reliable and to require less time for both processing and
analysis. Although several preprocessing techniques have been proposed,
preprocessing remains challenging owing to the high level of contrast in the
retinal vasculature network and variable image quality. In this work, a method
called Hessian Niblack’s Binarization Keypoint and Multi-scale Retinex
Bidirectional LSTM (HNBK-MR-BiLSTM) for Diabetic Retinopathy detection is
proposed. The HNBK-MR-BiLSTM is split into three sections: preprocessing,
feature extraction, and classification for diabetic retinopathy detection. Based
on the notion that the vessel profile can be modeled by separating the image
into red, green, and blue channels, the Hessian Frangi filter preprocesses the
input images by retaining only the green channel while eliminating the red and
blue channels for a better representation of the different types in fundus
images via second-order partial derivatives. Second, ophthalmoscopy artifacts
are removed and features from the retina images are combined based on Niblack’s
threshold binarization, together with the coefficients of the rectangular array
of pixels via PixMap. Finally, the Bidirectional Multi-scale Retinex LSTM is
applied to the detected keypoints for accurate classification among five
distinct classes of images. Experimental results show that the proposed
HNBK-MR-BiLSTM method learns more features of small targets and can efficiently
enhance the classification performance for the diabetic retina. A performance
analysis showing that the proposed method achieves better accuracy, time,
sensitivity, specificity, and Peak Signal-to-Noise Ratio (PSNR) than
conventional methods is described. The results show that the proposed
HNBK-MR-BiLSTM method is better at diagnosing diabetic retinopathy than existing
methods. |
|
Keywords: |
Diabetic Retinopathy, Hessian Frangi, Second-order Partial Derivatives,
Niblack’s Binarization, Keypoint, Nonlinear Feature Extraction, Multi-scale
Retinex, BiLSTM |
|
DOI: |
https://doi.org/10.5281/zenodo.19128318 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
ANALYZING LAGGED CAUSALITY OF MULTIPLE MARKET FACTORS ON BITCOIN PRICE: AN
INTERDISCIPLINARY APPROACH |
|
Author: |
SKY NURIMBA, TANTY OKTAVIA |
|
Abstract: |
The cryptocurrency market, particularly Bitcoin, is characterized by high
volatility influenced by various factors including public sentiment on social
media, trading volume, and psychological market indicators such as the Fear and
Greed Index (FGI). This study analyzes the lagged relationships and directional
causality between Platform X sentiment, Bitcoin trading volume, the FGI, and
Bitcoin price movements through an interdisciplinary approach that integrates
Natural Language Processing and econometric modeling. The novelty of this
research lies in its comprehensive multivariate framework that simultaneously
examines bidirectional causality among four key market variables using optimal
lag selection (10 periods) and the Toda-Yamamoto procedure, addressing critical
gaps in temporal dynamics that previous studies overlooked. Sentiment data from
Platform X was extracted using FinBERT, a model optimized for financial text
analysis, and Vector Autoregression together with Granger Causality Tests were
implemented to identify temporal lag patterns and causal relationships over the
period from March 2022 to March 2023. The results indicate that Bitcoin price
exerts a significant influence on Platform X sentiment, Bitcoin trading volume,
and the FGI, with trading volume being the only variable that also exhibits a
reverse influence on Bitcoin price. These findings suggest that price movements
play a dominant role in shaping sentiment, trading activity, and market
psychology, while trading volume maintains a feedback loop with price. This
study provides both practical and theoretical contributions by offering a
comprehensive framework for understanding the dynamics of the cryptocurrency
market and the interactions among social media sentiment, trading behavior, and
investor psychology in relation to Bitcoin price movements. |
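The study above relies on Vector Autoregression with Granger causality tests and
an optimal lag of 10 periods. As a much simpler, stdlib-only illustration of
what lag selection means (not the Toda-Yamamoto procedure itself), the sketch
below scans lagged Pearson correlations between two toy series; the data and
function names are invented.

```python
from statistics import mean, pstdev

def corr(a, b):
    """Population Pearson correlation of two equal-length series."""
    ma, mb, sa, sb = mean(a), mean(b), pstdev(a), pstdev(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) * sa * sb)

def best_lag(driver, target, max_lag=10):
    """Return the lag (in periods) at which past values of `driver`
    correlate most strongly with current values of `target`."""
    scores = {k: corr(driver[:-k], target[k:]) for k in range(1, max_lag + 1)}
    return max(scores, key=lambda k: abs(scores[k]))

# Toy series: `target` echoes `driver` three periods later.
driver = [1, 3, 2, 5, 4, 6, 5, 7, 6, 8, 7, 9, 8, 10]
target = [0, 0, 0] + driver[:-3]
print(best_lag(driver, target))  # 3
```

A Granger test goes further: it fits autoregressions of the target with and
without the lagged driver terms and checks whether the lagged terms
significantly improve the fit, rather than just measuring correlation.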
|
Keywords: |
Machine Learning, Sentiment Analysis, FinBERT, Cryptocurrency, Granger Causality |
|
DOI: |
https://doi.org/10.5281/zenodo.19128350 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
AVAILABILITY AND IMPACT OF ARTIFICIAL INTELLIGENCE SYSTEMS ON DECISION MAKING
SUPPORT IN U.S. SMALL AND MEDIUM-SIZED BUSINESSES |
|
Author: |
KATERYNA HALAN |
|
Abstract: |
Artificial intelligence (AI) systems are becoming a critical tool for digital
transformation, but their availability to small and medium-sized businesses
(SMBs) remains limited and uneven. High cost, lack of human resources (HR), and
low digital maturity create barriers that hinder the implementation of
innovations in enterprises in this segment. The aim of the research is to assess
how the availability of AI-based decision support systems (DSS) affects the
likelihood of their implementation in the United States (USA). The study also
determines how such technologies change labour productivity, operating costs,
forecast accuracy, and the speed of management decisions in SMBs. The
methodology is based on panel data from 2022–2024 from The Business Trends and
Outlook Survey (BTOS), Annual Business Survey (ABS), and the Federal Reserve
Small Business Credit Survey. An AI Availability Index (AIAI) is proposed,
created using the principal components method, which combines indicators of
cost, digital maturity, integration, HR, and telecommunications infrastructure.
The results show that availability significantly increases the likelihood of
AI-DSS implementation in SMBs in the USA. Empirical estimates show an increase
in labour productivity by about five percent and a decrease in the share of
operating costs by more than three percentage points. The accuracy of forecasts
improves, and the speed of management decisions increases because of the
integration of AI systems. The effects are significantly stronger for high AIAI
values, confirming the key role of infrastructure and digital maturity.
Practical conclusions include the development of quick-start roadmaps for SMBs
and the implementation of risk management protocols under the NIST AI Risk
Management Framework (RMF). The academic novelty is the creation of a unique
AIAI and conducting a causal analysis of the effects on key performance
indicators (KPIs), taking into account heterogeneity. |
|
Keywords: |
Artificial Intelligence, Decision Support Systems, Small and Medium-Sized
Business, Availability, Digital Transformation, Productivity, Econometric
Modelling |
|
DOI: |
https://doi.org/10.5281/zenodo.19128399 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
IMPROVEMENT OF ANFIS FOR EARLY CARDIOVASCULAR DISEASE PREDICTION MODEL USING
DIFFERENTIAL EVOLUTION (DE) AND LIME EXPLAINABLE AI |
|
Author: |
SRI SUMARLINDA, WIJI LESTARI, FAULINDA ELY NASTITI |
|
Abstract: |
The early detection of cardiovascular disease (CVD) remains a critical challenge
in preventive healthcare, requiring predictive models that are both accurate and
interpretable. This study aims to develop an early CVD prediction model enhanced
with Differential Evolution (DE) optimisation and explainable AI using LIME. The
models were evaluated on a dataset of 500 samples comprising six features: age,
body mass index (BMI), systolic blood pressure, diastolic blood pressure,
cholesterol, and blood sugar. The ANFIS model was improved by optimising its
premise and consequent parameters through mutation, crossover, and selection
processes within the DE algorithm. LIME was employed to provide interpretability
by revealing the contribution of each feature to the ANFIS-DE prediction
outcomes. The dataset was evaluated using three different data-splitting schemes
to identify the most effective training-testing proportion. In Model 1, the data
were divided into 60% for training and 40% for testing. Model 2 applied a
70%:30% split, whereas Model 3 used an 80%:20% split. Among these
configurations, Model 3 (80%:20%) yielded the highest predictive performance,
indicating that a larger portion of training data contributed to better model
generalisation and overall accuracy. The enhanced ANFIS-DE model outperformed
the baseline ANFIS, achieving higher testing accuracy (0.9200 vs. 0.9167),
precision (0.9290 vs. 0.9287), recall (0.9250 vs. 0.9250), and F1-score (0.9425
vs. 0.9367), alongside a lower error value (0.2011 vs. 0.2019). LIME analysis
further indicated that blood sugar had the highest contribution (0.27), followed
by systolic blood pressure (0.25), age (0.20), cholesterol (0.06), and BMI
(0.03), while diastolic blood pressure exerted a slight negative influence
(–0.01), demonstrating the usefulness of feature-level explanations in
supporting early CVD risk prediction. |
|
Keywords: |
Cardiovascular Disease, ANFIS, Differential Evolution (DE), LIME Explainable AI,
Early Prediction Model |
|
DOI: |
https://doi.org/10.5281/zenodo.19128449 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
PECULIARITIES OF THE USE OF BLOCKCHAIN TECHNOLOGIES IN DOCUMENTING AND
AUTHENTICATION OF EVIDENCE DURING CYBER INVESTIGATIONS |
|
Author: |
VIACHESLAV KULIUSH, VLADYSLAV VEKLYCH, NATALIIA IAKYMCHUK, SERGIY MARCHEVSKYI,
IVAN KURYLIN |
|
Abstract: |
As cybercrimes rise, effective methods to preserve and evaluate digital evidence
are needed. This study examines how blockchain technology might enhance cyber
investigation, digital evidence capture, and verification. This investigation
compares public and permissioned blockchain systems' legal and forensic
reliability. Legal and forensic comparisons were conducted in Estonia, Germany,
Ukraine, and the Netherlands. The inquiry focused on phishing, ransomware, and
data breaches. About 300 instances were examined. The Hyperledger and Ethereum
systems use SHA-256 and Keccak-256 hashing, respectively. The verification
method succeeded in over 97% of cases across all evidence categories. Specialist
studies found 98% agreement on
the approaches' reliability. Blockchain technology creates verifiable,
transparent, and immutable forensic records. Permissioned blockchains follow all
rules and processes better than public networks. In addition, CipherTrace and
Chainalysis Reactor analyzed 36 cryptocurrency samples. It was found that all
Bitcoin, Ethereum, and stablecoin transactions could be tracked.
Privacy-focused cryptocurrencies such as Zcash and Monero have not been
thoroughly investigated.
The results support the idea that blockchain-based solutions improve digital
evidence credibility and admissibility in court. This work's main contribution
is a repeatable approach. The approach includes legal analysis, subject matter
expert validation, and technological verification. Investigators and courts may
verify digital artifact authenticity, traceability, and transparency using this
technique. |
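The abstract above notes that the Hyperledger and Ethereum systems rely on
SHA-256 and Keccak-256 hashing to make evidence records verifiable and
immutable. As a minimal sketch of the underlying idea (not the authors'
pipeline), the following computes a SHA-256 Merkle root over a list of evidence
blobs; the file names are invented.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute a SHA-256 Merkle root over a list of evidence blobs.
    Any change to any leaf changes the root, which is what makes
    blockchain-anchored evidence records tamper-evident."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0].hex()

evidence = [b"phishing-email.eml", b"ransom-note.txt", b"firewall.log"]
root = merkle_root(evidence)
tampered = merkle_root([b"phishing-email.eml", b"ransom-note.TXT",
                        b"firewall.log"])
print(root != tampered)  # True
```

Anchoring the root on a blockchain lets an investigator later prove that the
evidence set existing at anchoring time was not altered afterwards.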
|
Keywords: |
Blockchain, Cyber Investigations, Digital Evidence, Forensic
Authentication, Data Integrity, Cryptocurrencies, Legal Admissibility |
|
DOI: |
https://doi.org/10.5281/zenodo.19128500 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
THREATDETECTAI: AN AI-POWERED ZERO TRUST FRAMEWORK FOR REAL-TIME THREAT
DETECTION AND CRYPTOGRAPHIC RE-VALIDATION IN CLOUD COMPUTING |
|
Author: |
D.VENKATESWARLU, Dr. B. SATEESH KUMAR |
|
Abstract: |
Cloud computing environments operate at a vast scale and support heterogeneous
user activity, but ensuring data integrity is a key security challenge. While a
few attempts have been made to improve integrity
verification, most existing integrity verification methodologies rely on a
static cryptographic proof that involves an untrusted element or a rule-based
validator. This independent anomaly detection method fails to detect minute
insider threats, evolutionary attack patterns, or multi-modal correlations
(e.g., log, network flow, and cryptographic traces). Although Zero Trust
architectures employ many of these principles, the continuous validation of
user-level trust and the prevention of information modification in a
multi-tenant infrastructure create blind spots and limitations. This calls for
an adaptive, intelligence-driven framework that integrates threat detection
elements with cryptographic proof-checking. In this work, we introduce
ZeroTrustAI, the first proof framework for AI-integrity, which combines a
cryptographic verification engine with a hybrid deep learning algorithm called
ThreatDetectAI. ThreatDetectAI utilises a CNN–BiLSTM–Attention model to identify
spatial–temporal patterns in user behaviour and produce dynamic threat scores.
Suspicious events are checked by Proofcryptnet (hash, digital signature, Merkle
root). Such a system is integrated, providing a continuous, zero-trust
verification loop with risk-based, adaptive access control. To evaluate the
proposed approach, we conduct extensive experiments with many different data
types, including cloud logs, network telemetry, and cryptographic metadata and
show that it achieves 97.8% accuracy, 97.2% precision, 98.1% recall, and 0.986
AUC, significantly outperforming the classical (RF, SVM, XGBoost) and deep
learning baselines (CNN-only, LSTM-only, Transformer). The new version, with
built-in cryptographic validation, achieved an 18% decrease in false positives
when deployed for high-volume activity scenarios. We propose a framework that
satisfies the scalability, explainability, and audit-friendliness criteria for
unobtrusive, preemptive confidence assurance at the context-user level, thereby
facilitating continuous authentication, adaptive authorisation, and strong,
defence-grade countermeasures against adverse events in agile cloud settings.
These results showcase the practical use of ZeroTrustAI to bolster protection in
modern enterprise cloud environments. |
|
Keywords: |
Zero Trust Security, Deep Learning Threat Detection, Cryptographic Integrity
Proof, Cloud Security Framework, User Behaviour Analytics |
|
DOI: |
https://doi.org/10.5281/zenodo.19133011 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
ENHANCED STATIC DEEPFAKE DETECTION THROUGH FACE SEGMENTATION AND
TRANSFORMER-BASED MODELING |
|
Author: |
SAURABH KUMAR JAIN, MOHD AKBAR |
|
Abstract: |
The introduction of advanced generative models such as Generative Adversarial
Networks (GANs) and diffusion architectures has made it easier to create
hyper-realistic fake or tampered facial images. This advancement makes it very
challenging to ensure the reliability of visual content in digital media.
Detecting fake images is difficult, and it is a major concern for privacy,
security, and trust on online platforms. The work proposed in this
article demonstrates an integrated framework for real and fake face image
detection that combines the advanced image preprocessing method with the Vision
Transformer (ViT) based detection model. The image processing stage is applied
to remove the backgrounds and isolate key facial regions. This process reduces
unwanted objects and segments the discriminative facial features. The Vision
Transformer restructures the face images into sequences of non-overlapping
patches and applies global self-attention to capture subtle structural and
textural inconsistencies that distinguish real and fake face images.
Experimental results are presented showing that background removal increases
detection accuracy compared to unprocessed images. The proposed approach
demonstrates strong performance on a very diverse
dataset and successfully highlights the potential of transformer-based
architectures for providing a scalable, interpretable, and robust deepfake
detection method. |
|
Keywords: |
Deepfake, Vision Transformer, Face Detection, Image Segmentation, Data Mining |
|
DOI: |
https://doi.org/10.5281/zenodo.19133063 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
DEVELOPMENT OF AN AI-BASED PLATFORM FOR RECOMMENDING PROGRAMMING INSTRUCTION
PLANS |
|
Author: |
SUPAPAT THANTURANON, SUWUT TUMTHONG |
|
Abstract: |
This research aims to develop and evaluate the effectiveness of AI-PINS
(AI-based Programming Instruction Navigation System), a platform for
recommending programming lesson plans for primary school teachers in municipal
schools. It is designed within a framework of six theories: System Theory
(IPOF), Agent Theory, Experiential & Adaptive Learning, Explainable AI (XAI),
Information Theory, and Cognitive Load Theory. This platform consists of an
AI-Agent Core that continuously recognises, analyses, and makes decisions based
on real teaching data. The system applies DKT, CBR, RAG, Reinforcement Learning,
and intelligent recommendation techniques to generate lesson plans that align
with OBEC indicators and learner contexts. The evaluation consists of content
validity (IOC ≥ 0.80), evaluation by 15 experts, statistical analysis of the
mean, standard deviation, IOC, and Cronbach’s Alpha ≥ 0.90, and the application
of TAM 4.0 to measure acceptability. The evaluation results indicated that the
system is most appropriate (x̄ = 4.67, S.D. = 0.13), especially in the
dimensions of Intelligence Quality and Decision Quality (x̄ = 4.65 and 4.80,
respectively). This is due to the efficiency of ML, NLP, RAG, and the
Explainable Decision module, which includes XAI to explain to users the reasons
for recommending learning management plans and selecting activities through the
Dashboard. Regarding the necessity of XAI in the education system, the system
also has the potential for continuous learning through MLOps Monitoring and is
user-friendly (x̄ = 4.53), supporting TAM 4.0 factors in municipal schools. In
summary, AI-PINS is a robust, well-evaluated architecture that helps reduce
teachers' workload, supports personalised learning, and has the potential to
expand digital learning policies at the national level. |
|
Keywords: |
AI-Based Platform, Artificial Intelligence, Learning Management Plan, Primary
School Teacher, Programming Fundamentals |
|
DOI: |
https://doi.org/10.5281/zenodo.19133102 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
CAUSAL FOUNDATION MODELS FOR ONCOLOGY: A TEMPORAL MULTI-MODAL FRAMEWORK FOR
COUNTERFACTUAL PROGNOSIS AND TREATMENT RESPONSE IN GLIOBLASTOMA |
|
Author: |
RAMAKRISHNA KOLIKIPOGU, Dr. K.V.VIVEKANANDA, CHOPPA.ANANDA KUMAR REDDY, THANU
KURIAN, AMIT VERMA, Dr.R.SENTHAMIL SELVAN |
|
Abstract: |
Modern clinical AI systems are dominated by correlative models that predict
outcomes from observed patterns but typically fail to provide reliable answers
to interventional “what-if” questions required for treatment selection and
policy evaluation. Such limitations reduce robustness when clinical practice,
treatments, or patient subgroups shift. The study develops a causal multi-modal
foundation model to estimate prognosis and treatment response for glioblastoma
multiforme (GBM), enabling counterfactual reasoning over longitudinal treatment
timelines. The present study compares three model classes: (i) Baseline - a
standard multi-modal transformer using pre-trained encoders (UMME); (ii)
Advanced - a causal multi-modal foundation model that injects structural causal
constraints and causal regularization during training; and (iii) Proposed - a
Temporal Causal Multi-Modal Transformer (TCMMT) that models temporal treatment
patterns, integrates dynamic multi-modal diagnostics, and contains a
counterfactual head for intervention simulation. Training data were drawn from
TCGA-GBM and longitudinal institutional cohorts with curated treatment timelines
and censoring-aware labels. The proposed TCMMT achieved a concordance index
(C-index) of 0.87 for survival prediction and improved counterfactual treatment
response prediction accuracy by 23% relative to non-causal baselines (relative
improvement). By combining causal graph structure, temporal encoding of
treatment histories, and multi-modal foundation encoders, TCMMT supports
in-silico clinical trials and personalized treatment optimization, marking a
shift from purely predictive to prescriptive AI in oncology. |
|
Keywords: |
Causal AI, Foundation models, Temporal multi-modal learning, Counterfactual
reasoning, Glioblastoma multiforme (GBM), Structural causal model (SCM). |
|
DOI: |
https://doi.org/10.5281/zenodo.19133154 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
IOMT- BASED SYSTEM FOR EARLY DETECTION OF CARDIAC ISSUES: CORE FEATURES AND
DESIGN |
|
Author: |
VIDHYA G , VIJAY BHANU |
|
Abstract: |
Cardiovascular diseases are among the leading causes of death worldwide,
largely because early or short-lived changes in heart rhythm often go
undetected. The Internet of Medical Things (IoMT) enables continuous
monitoring of a patient’s heart condition, but existing systems face problems
such as delayed real-time results, weak data protection, and high power
consumption in wearable devices. In this work, we introduce a secure
edge-based IoMT system that detects heart problems early using ECG data. The
model uses a small, efficient CNN that can run on low-power devices and can
detect arrhythmia, tachycardia, and bradycardia within one second, since
processing happens at the edge instead of the cloud. To keep data safe, we use
AES and TLS encryption, in line with HIPAA and GDPR privacy rules. The main
goal is to make the system practical by maintaining a proper balance between
speed, security, and energy efficiency. Unlike systems that depend heavily on
the cloud or blockchain, this method works close to the patient and therefore
delivers fast results with low latency. By combining rapid edge processing,
lightweight machine learning, and strong encryption, this work lays a
foundation for future IoMT systems that are safe, reliable, and ready for
real-world use. |
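As a toy illustration of the detection targets (not the paper's CNN, whose architecture is not given in the abstract), heart-rhythm classes can be derived from ECG RR intervals using standard clinical heart-rate thresholds:

```python
# Hypothetical sketch (not the paper's model): classify heart rhythm from
# RR intervals (seconds between successive beats) using standard clinical
# thresholds: below 60 bpm bradycardia, above 100 bpm tachycardia.

def classify_rhythm(rr_intervals):
    mean_rr = sum(rr_intervals) / len(rr_intervals)
    bpm = 60.0 / mean_rr  # beats per minute from mean RR interval
    if bpm < 60:
        return "bradycardia"
    if bpm > 100:
        return "tachycardia"
    return "normal"

print(classify_rhythm([0.8, 0.82, 0.78]))   # roughly 75 bpm
print(classify_rhythm([1.3, 1.25, 1.35]))   # roughly 46 bpm
```

The paper's CNN would learn far richer morphological features from the raw ECG waveform; this rule-based version only captures the rate component.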
|
Keywords: |
Cardiovascular Diseases, Internet of Things (IoMT), Wearable Devices,
Encryption, Blockchain |
|
DOI: |
https://doi.org/10.5281/zenodo.19133190 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
DEEP LEARNING-DRIVEN FLEXIBLE BIOSENSOR SYSTEM FOR CONTINUOUS HEALTH MONITORING
AND EARLY DISEASE DETECTION |
|
Author: |
SIVA SANKAR NAMANI, Dr. TAVITI NAIDU GONGADA, CHOPPA.ANANDA KUMAR REDDY, Dr.
MAHAVIR A. DEVMANE, AMIT VERMA, Dr.R.SENTHAMIL SELVAN |
|
Abstract: |
Continuous, low-burden physiological monitoring with flexible patches faces a
fundamental accuracy-efficiency trade-off: server-bound clinical-grade models
achieve high sensitivity but exceed wearable energy budgets, while on-device
methods must trade sensitivity to meet energy constraints. To obtain
server-class diagnostic performance on microcontroller-class wearables, this
study presents a Conv-Transformer-GNN
architecture that is ready for deployment. The system incorporates multimodal
sensor fusion, knowledge distillation, and federated aggregation. The main
achievement is the proof that graph-based multimodal fusion with
quantization-aware distillation may maintain clinical accuracy while fitting
within tight memory, latency, and energy budgets. The distilled edge model
closely follows the teacher model in experimental outcomes (ROC-AUC 0.89
vs. 0.92), while having a small footprint of 340 KB, an inference energy of
approximately 4.5 mJ, and a p95 latency of about 85 ms. These results provide
useful guidelines for the development of adaptable biosensor systems that can
scale while protecting users' privacy. |
|
Keywords: |
Flexible Patch; Edge Computing; Conv-Transformer-GNN; Federated Aggregation;
Knowledge Distillation. |
|
DOI: |
https://doi.org/10.5281/zenodo.19133210 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
BUILDING USER-CENTRIC AGENTIC SYSTEMS: DESIGN PRINCIPLES AND INTERFACE
CONSIDERATIONS FOR AI-DRIVEN DECISION SUPPORT |
|
Author: |
RAJA MOHD TARIQI RAJA LOPE AHMAD, MUHAMMAD FAIRUZ ABD RAUF, RITA WONG MEE MEE,
NUR AMLYA ABD MAJID, SHARIFAH AISHAH SYED ALI, FAZILATULAILI ALI, ZURAINI
ZAINOL, MOHD FAHMI MOHAMAD AMRAN |
|
Abstract: |
Artificial intelligence systems are capable of analysing complex data and
generating valuable insights, yet their impact is reduced when users are unable
to understand or trust the recommendations produced. This paper introduces a
user centred approach to the design of artificial intelligence interfaces that
aim to make complex insights both accessible and trustworthy. Using the
AutoAgentic financial analysis platform as a case study, we illustrate how
careful interface design can connect advanced artificial intelligence
capabilities with practical human usability. Through an iterative process of
design and evaluation, interface solutions were developed and tested to support
the visualisation of market regimes, explanation of system recommendations,
communication of confidence levels and provision of appropriate user control.
The results indicate that visual simplicity, layered information presentation,
plain language explanations and meaningful user control substantially enhance
user understanding and trust. Based on these findings, four reusable design
patterns are proposed, namely Confidence Thermometer, Why This Recommendation,
Progressive Disclosure and Plain Language Translator, which can be applied
across a wide range of artificial intelligence decision support systems. This
study contributes practical guidelines for the design of artificial intelligence
interfaces that effectively support human decision making through transparent,
understandable and trustworthy interactions. |
|
Keywords: |
User Interface Design, Explainable AI, Decision Support Systems, Human-AI
Interaction, Financial Analytics, Agentic Systems |
|
DOI: |
https://doi.org/10.5281/zenodo.19133230 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
USE OF DIGITAL PUBLIC GOVERNANCE TOOLS IN A STATE OF EMERGENCY TAKING INTO
ACCOUNT ADMINISTRATIVE AND LEGAL RESTRICTIONS |
|
Author: |
IRYNA MURAVIOVA, OLEKSANDR YERMAK, VIKTORIIA KORETSKA, VALERIIA RIADINSKA,
NADIIA VASYLENKO |
|
Abstract: |
The article presents the results of a comparative study of the use of digital
public governance tools in a state of emergency using the example of Ukraine,
Germany, and Estonia. The aim is to assess the administrative and legal balance
of digital governance using the integrated Administrative and Legal Balance
Index (ALBI), which covers efficiency (Eff), transparency (Transp), legal
compliance (LegComp), and human rights protection (HRProt). The methodology
combines comparative law, statistical and modelling approaches using k-means
clustering, PCA visualization, bootstrap estimation (n = 1000), and scenario
modelling of three management situations (interdepartmental coordination,
electronic identification, citizens’ appeals). The ALBI weights were determined
based on the results of a three-round Delphi survey and AHP-consensus of 27
experts from three countries. The research is based on the analysis of
regulatory acts (Onlinezugangsgesetz, Public Information Act, Government
Regulation No. 105; the laws of Ukraine “On the Legal Regime of Martial Law”,
“On Electronic Trust Services”) and the aggregated indicators DESI, EGDI, and
Rule of Law Index. It was found that Germany has the highest level of balance
(ALBI = 0.86) due to the combination of technological maturity with effective
judicial control. Estonia demonstrates maximum efficiency (Eff = 0.88) and
regulatory stability due to X-Road and the once-only data principle. Ukraine is
characterized by high rates of digitalization (Eff = 0.78) with lower values of
transparency (Transp = 0.61) and human rights protection (HRProt = 0.59). It was
established that the balance of digital governance critically depends on
harmonization with Regulation (EU) 2024/1183 (eIDAS 2.0), Recommendation
CM/Rec(2018)7 of the Council of Europe, and the OECD Digital Government
Framework. The academic novelty is the developed lawful-by-design model that
integrates digital solutions with administrative and legal procedures at the
design stage. Further research prospects are related to the creation of
adaptive crisis management systems capable of automatically maintaining a
balance between speed, legality, and protection of citizens’ rights in emergency
legal regimes. |
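The ALBI aggregation described above can be sketched as a weighted sum of its four sub-indicators. The equal weights below are hypothetical placeholders (the paper derives its weights from a three-round Delphi survey and AHP consensus of 27 experts), and the LegComp value for Ukraine is an assumption, since the abstract does not report it:

```python
# Sketch of the Administrative and Legal Balance Index (ALBI) as a weighted
# sum of efficiency, transparency, legal compliance, and human rights
# protection. Equal weights are hypothetical; the study uses Delphi/AHP.
WEIGHTS = {"Eff": 0.25, "Transp": 0.25, "LegComp": 0.25, "HRProt": 0.25}

def albi(scores):
    # Weighted sum of the four normalized (0..1) sub-indicators.
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Eff, Transp, HRProt are taken from the abstract; LegComp (0.65) is an
# assumed placeholder for illustration only.
ukraine = {"Eff": 0.78, "Transp": 0.61, "LegComp": 0.65, "HRProt": 0.59}
print(round(albi(ukraine), 2))
```

With real Delphi/AHP weights the same function applies unchanged; only the `WEIGHTS` table would differ.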
|
Keywords: |
Digital Public Governance, E-Governance, Administrative Law, Rule Of Law, Human
Rights, Sustainable Institutions |
|
DOI: |
https://doi.org/10.5281/zenodo.19136512 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
7G WIRELESS NETWORKS: HARNESSING TERAHERTZ-TO-LIGHTWAVE FOR ULTIMATE CAPACITY |
|
Author: |
SUBRAMANYA SARMA S, DR D. NAGARAJU, BHUVANESWARI S, PRAVEEN KUMAR GOPAGON,
K. DEEPTHI, VEERASWAMY AMMISETTY, DR. SURESH KUMAR PITTALA, DR. T. VENGATESH,
M.L.M. PRASAD, DR. BH. KRISHNA MOHAN |
|
Abstract: |
The exponential growth in data traffic, driven by immersive applications like
holographic communications, volumetric video, and the tactile internet, is
rapidly pushing the Shannon limit of current sub-6 GHz and millimeter-wave
(mmWave) spectrums. This paper explores the foundational role of Terahertz (THz:
0.1-10 THz) and lightwave communication (Li-Fi, Free-Space Optical, FSO) bands
as the cornerstone for the seventh-generation (7G) wireless networks. We propose
a novel, hierarchical network architecture that seamlessly integrates THz bands
for ultra-dense, short-range access and lightwave carriers for high-capacity
backbone and fronthaul links. A comprehensive literature survey establishes the
current state-of-the-art, highlighting the complementary strengths and
limitations of both technologies. We then detail a proposed system methodology,
including a hybrid beamforming and intelligent reflecting surface (IRS)
assisted THz system, coupled with a multi-wavelength FSO backbone. Simulation
results, based on realistic channel models and datasets, demonstrate that the
proposed THz-to-lightwave architecture can achieve aggregate data rates
exceeding 1 Tbps per access point and latency below 100 μs, thereby supporting
the key performance indicators (KPIs) envisioned for 7G. The paper concludes
with a discussion on implementation challenges, including mobility management
and atmospheric effects, and outlines a path forward for realizing the ultimate
capacity of wireless networks |
|
Keywords: |
7G, Terahertz (THz), Light wave Communication, Li-Fi, FSO, Hybrid Beam forming,
Intelligent Reflecting Surface (IRS), Tbps |
|
DOI: |
https://doi.org/10.5281/zenodo.19136530 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
DIGITAL AND AI TOOLS IN STATE FINANCIAL POLICY: EUROPEAN EXPERIENCE AND A CASE
FOR UKRAINE |
|
Author: |
HENNADII ZHUYKOV, SVITLANA PROKHORCHUK, DMYTRO MAZUR, YURI KULYNYCH, HALINA
MAZUR, OLEKSANDR MAZUR |
|
Abstract: |
In the current conditions of global digital transformation, public finances face
challenges and opportunities associated with the introduction of digital
technologies and artificial intelligence (AI). The relevance of the study is due
to the need to assess how digitalization and AI affect the effectiveness of
financial policy in different countries, especially in the context of the
challenges that Ukraine faces when reforming public finances in a period of
economic and social change. After all, a high level of digital services and the
active use of e-government do not always translate into increased fiscal
transparency, budget discipline, and the effectiveness of financial policy. The
article carries out a comprehensive interdisciplinary analysis of the impact of
digital instruments on financial policy using the example of Ukraine, its
neighbors from the European Union (Poland, Slovakia, Hungary, Romania) and
Turkey, a candidate country with active digitalization. The methodology is based
on a systematic combination of quantitative and qualitative indicators - from
the share of the ICT sector in gross value added, the volume of public and
private funding of scientific research, the level of digital skills of the
population and the use of AI, to indices of e-government development and
transparency of the budget process. Particular attention is paid to imbalances
between the digital skills of the population, the institutional capacity of the
state, the innovative activity of business and the quality of budget management.
The results revealed significant differences in the level of digital readiness,
innovation potential and institutional capacity of countries to use digital
technologies in financial policy. Ukraine is characterized by high demand for
digital services, but has limitations in the human resources of researchers and
weak institutional support for transparency and citizen participation. The
comparative analysis confirmed that to increase the effectiveness of financial
policy, a transition from the simple provision of digital services to a
comprehensive institutional transformation that integrates AI and digital tools
into all stages of the budget cycle is necessary. In addition, the most
effective impact of digitalization on financial policy is observed in countries
with a high level of digital skills, a developed R&D infrastructure, a powerful
ICT sector, and balanced indicators of budget transparency and citizen
participation (Poland, Slovakia). On the other hand, in Ukraine, despite the
high digital activity of citizens and the dynamic development of e-government,
the effect of digitalization is restrained by insufficient institutional
capacity, a low level of budget transparency, a weak innovation sector and a
decrease in human scientific and technological potential. The practical
significance of the work lies in the formulation of recommendations for Ukraine
on the adaptation of European models of digital transformation of finance, which
can become the basis for building a transparent, accountable and efficient
financial system. The study is also useful for other countries with similar
socio-economic challenges that seek to integrate innovations into public
financial management. |
|
Keywords: |
Digitalization, Artificial Intelligence, Financial Policy, E-Government, Digital
Skills, Budget Transparency, Institutional Capacity, Public Finances, Ukraine,
European Union, Turkey. |
|
DOI: |
https://doi.org/10.5281/zenodo.19136567 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
INTEGRATION OF TECHNOLOGIES INTO THE PRIMARY SCHOOL LEARNING PROCESS: BENEFITS
AND CHALLENGES |
|
Author: |
VIKTORIIA PAVELKO, ALINA PREDYK, IRYNA RADCHENIA, YULIA RYABOKIN |
|
Abstract: |
The need to integrate modern innovative tools into the formation of primary
school students’ competencies dictates a change in the approaches and tools of
the educational process. The goal is to analyze the current challenges and
advantages of using digital technologies in primary schools based on theoretical
research and practical tools. Research methods. The paper considers innovative
pedagogical tools that have been developed based on modern digital solutions, in
particular online platforms and other interactive tools, as well as blended
learning technologies for primary-school-age children. A pedagogical
experiment employing virtual technologies was conducted, and its results
informed this study. In particular,
emphasis was placed on the main advantages of innovative technologies that
contribute to increasing the interest of young students, facilitate the learning
process, and improve the quality of information assimilation. At the same time,
interactive technologies allow for the most effective study of educational
material in inclusive learning spaces. However, it is worth noting the
possibility of information overload, decreased attention and concentration,
increased risks of negative effects on children’s health, and deterioration of
social interaction. It has been proven that the use of visualization tools makes
it possible to explain complex circumstances and abstract concepts to children,
which improves the effectiveness of perception, engages the sensory and
emotional spheres, and promotes critical thinking. |
|
Keywords: |
Interactive Technologies, Innovations, Natural Sciences, Future Teachers,
Primary Education, Teaching Methodologies |
|
DOI: |
https://doi.org/10.5281/zenodo.19136592 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
REAL-TIME CONTEXT AWARE VARIABLE STRENGTH CRYPTOGRAPHY FOR CLOUD STORAGE |
|
Author: |
Dr.A.BARAKATH BEGAM, Dr.A.BHUVANESHWARI, Dr.J.JENIFER, Dr.S.VEERAPANDI,
Dr.S.MURUGANANDAM, PRAJWALASIMHA S N, POTHUMARTHI SRIDEVI, Dr.L.THENMOZHI,
DR.T.VENGATESH |
|
Abstract: |
In response to the escalating dependence of individuals and enterprises on cloud
storage solutions, the demand for robust data protection mechanisms has become
increasingly imperative. Conventional cryptographic techniques typically use a
fixed level of encryption strength, which can either waste resources or fail to
provide adequate protection by not adjusting to different levels of data
sensitivity or evolving security threats. This paper introduces an innovative
solution called Real-time Context-Aware Variable Strength Cryptography (RTCAVSC)
for cloud storage. RTCAVSC automatically modifies encryption strength in real
time based on various contextual factors, such as how sensitive the data is,
patterns in user access, and the prevailing security conditions. The Fuzzy
Real-time Sensitivity Tracer, Dynamic Cloud Environment Interpreter, and
Variable Strength Cryptography Manager are the novel modules that operate in
unison within RTCAVSC. A dedicated cloud server is leased to evaluate the
RTCAVSC method in terms of Encryption Time, Decryption Time, Throughput,
Latency, Resource Utilization, and Data Security in a real-time
environment. |
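A minimal policy sketch of the variable-strength idea follows; the factor weights, thresholds, and key-length mapping are our illustrative assumptions, not the RTCAVSC internals. Contextual factors are blended into one score, which selects an AES key length:

```python
# Hypothetical sketch of context-aware variable-strength encryption policy
# (weights and thresholds are illustrative, not the paper's): blend data
# sensitivity, threat level, and access-pattern anomaly into one score,
# then map the score to a standard AES key length.

def context_score(sensitivity, threat, access_anomaly):
    # Simple weighted blend of contextual factors, each in [0, 1].
    return 0.5 * sensitivity + 0.3 * threat + 0.2 * access_anomaly

def select_key_bits(score):
    if score >= 0.7:
        return 256   # high-risk context: strongest AES variant
    if score >= 0.4:
        return 192
    return 128       # low-risk context: cheaper encryption

print(select_key_bits(context_score(0.9, 0.8, 0.5)))  # high-sensitivity case
```

A fuzzy-logic version, as the paper's module names suggest, would replace the crisp thresholds with membership functions, but the control flow is analogous.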
|
Keywords: |
Context Aware, Cloud Data Security, Real-Time Variable Strength Cryptography,
Variable Strength Cryptography |
|
DOI: |
https://doi.org/10.5281/zenodo.19136602 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
XMGNET: A CROSS-MODALITY GRAPH NEURAL NETWORK WITH MODALITY-AWARE RESIDUAL
ALIGNMENT FOR MULTIMODAL EMOTION RECOGNITION |
|
Author: |
RAMAPRAKASH KALAPALA, MAHESH YADLAPATI, S. VIJAYA NIRMALA, VINOD GOJE, VIJAYA
BHASKAR SADHU, DR. B. NARENDRA KUMAR |
|
Abstract: |
Emotion estimation from multimodal information is now a central focus of
affective computing, especially for mental health applications, adaptive user
interfaces, and human-robot interaction. Fusing multimodal biosignals, however,
remains a key challenge owing to differing temporal patterns, varying signal
quality, and the lack of practical inter-modal alignment techniques. This work
proposes XMGNet (Cross-Modality Graph Neural Network), a new graph-attentional
model that represents emotion as an intermodal joint function of spatial,
temporal, and semantic relationships between modalities. Our model builds
dynamic graphs for each of the modalities—EEG, ECG, GSR, and facial
expressions—prior to modality-wise graph convolutions blended with multi-head
cross-attention for hierarchical fusion. To enhance robustness and
generalization, we present a Modality-Aware Residual Alignment (MARA) block that
adjusts at runtime to missing or corrupted channels. We test XMGNet on three
publicly released datasets: DEAP, DREAMER, and MAHNOB-HCI, with state-of-the-art
accuracy of 94.3%, 91.7%, and 89.8%, respectively, performing better than recent
transformer- and LSTM-based models. The proposed model exhibits scalable,
explainable, and high-performance emotion recognition without the need for
handcrafted synchronization, with potential deployment on real-time
emotion-aware systems. |
|
Keywords: |
Multimodal Emotion Recognition; Graph Neural Networks; Cross-Modality Fusion;
EEG; ECG; GSR; Facial Expressions; Modality-Aware Alignment; Affective
Computing; Attention Mechanisms |
|
DOI: |
https://doi.org/10.5281/zenodo.19136622 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
TRI-PARADIGM ANALYSIS OF COMPUTATIONAL COMPLEXITY IN NP-HARD SUDOKU SOLVERS
USING EXPLAINABLE AI |
|
Author: |
RAJAN THANGAMANI, PALLAVI R |
|
Abstract: |
Sudoku is a demanding testbed for computational complexity and constraint
satisfaction. This work develops a comprehensive and replicable
benchmark that reformulates 9×9 Sudoku puzzles under three solver paradigms:
constraint programming (CP), Boolean satisfiability (SAT), and integer
programming (IP) in order to assess efficacy and explainability. For each
paradigm, a parametrically equivalent model is implemented and solved with
common datasets and experimental control conditions. We measure wall-clock
running time, peak memory usage, and paradigm-specific measures of search
effort: backtracks or fails in CP, decisions or conflicts under conflict-driven
clause learning (CDCL) in SAT, and branch-and-bound nodes or linear-program
relaxations in IP. Aggregate trends provide empirical evidence consistent with
NP-hard/NP-complete behavior: runtime grows approximately exponentially with
constraint tightness, while memory remains nearly flat for standard grids. An
explicit Explainable Artificial Intelligence (XAI) trade-off emerges. SAT
delivers the lowest runtime via clause learning and pruning; CP yields the most
interpretable reasoning through transparent propagation and traceable search; IP
offers algebraic auditability and flexibility for optimization variants but
incurs higher search effort on pure feasibility. The benchmark, figures, and
tables establish a verifiable claim–evidence link and supply a practical
baseline for integrating explainability metrics into analytic decision
pipelines. The results inform the design of intelligent and explainable solver
architectures, aligning with the aims of intelligent systems and their applications.
We conclude with a roadmap toward hybrid CP–SAT approaches that retain CP-level
interpretability while leveraging SAT-level efficiency. |
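To make the CP effort measure concrete, here is a minimal backtracking Sudoku solver (an illustrative sketch, not the paper's benchmark code) instrumented to count backtracks, the CP-side effort metric the abstract discusses:

```python
# Minimal CP-style Sudoku solver: depth-first backtracking with row, column,
# and 3x3-box constraint checks, counting backtracks as the effort measure.

def valid(grid, r, c, v):
    # Check the row, column, and 3x3-box constraints of standard 9x9 Sudoku.
    if v in grid[r] or any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(grid, stats):
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if valid(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid, stats):
                            return True
                        grid[r][c] = 0
                stats["backtracks"] += 1  # all nine values failed here
                return False
    return True  # no empty cell left: solved

puzzle = [[0] * 9 for _ in range(9)]  # empty grid: maximally unconstrained
stats = {"backtracks": 0}
solved = solve(puzzle, stats)
```

A SAT encoding of the same puzzle would instead report CDCL decisions and conflicts, and an IP encoding branch-and-bound nodes, which is exactly the cross-paradigm comparison the benchmark makes.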
|
Keywords: |
Sudoku, NP-Complete, Constraint Programming, SAT, Integer Programming,
Explainable AI |
|
DOI: |
https://doi.org/10.5281/zenodo.19136632 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
INFORMATION TECHNOLOGIES AS A MEANS OF IMPLEMENTING PUBLIC POLICY IN THE FIELD
OF E-DEMOCRACY |
|
Author: |
LILIIA BURACHOK, IVANNA LOMAKA, OKSANA LYPCHUK, RUSLAN IBRAHIMOV, NATALIIA
SHOTURMA |
|
Abstract: |
Information technologies are revolutionizing the landscape of contemporary
public policy by facilitating extensive communication avenues between the state
and its citizens. The purpose of this study is to systematically evaluate the
impact of e-government development on the quality, effectiveness, and
accountability of public administration, as well as its role in reducing
corruption, using empirical methods to identify which aspects of digital
governance exert the strongest influence. The research employed methodologies
such as multiple linear regression, ANOVA, correlation analysis, as well as
radar chart analysis. This paper investigates the impact of e-government on the
quality of public administration, specifically focusing on the indicators "Voice
and Accountability", "Government Effectiveness", and "Control of Corruption". An
analysis encompassing 193 countries worldwide was conducted utilizing data from
2014 to 2022, drawing upon three sub-indices of the E-Government Development
Index: the Human Capital Index, the Online Services Index, and the
Telecommunications Infrastructure Index. The findings revealed that the Voice
and Accountability indicator is significantly positively influenced by both the
Human Capital Index and the Telecommunications Infrastructure Index, with Beta
regression coefficients of 0.35 and 0.25, respectively. Conversely, the Online
Services Index did not emerge as a significant determinant. These results
underscore the critical importance of human capital development and
telecommunications infrastructure in enhancing the efficacy of public
administration and curtailing corruption. In contrast, online services exhibit a
relatively weak impact unless synergized with other factors. The practical
implications of this research suggest that its findings could contribute to
improving strategies for the digital transformation of public administration
through the advancement of human capital and telecommunications infrastructure.
Future research should concentrate on comparing nations with varying levels of
digital advancement to discern localized governance characteristics and examine
the adverse effects of digitalization on democracy. |
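As a side note on interpreting the reported Beta coefficients (0.35 and 0.25): for a single standardized predictor, the Beta coefficient equals the Pearson correlation between predictor and outcome. A small sketch of that computation on toy data (ours, not the study's):

```python
# Pearson correlation, which for one standardized predictor equals the
# Beta regression coefficient reported in studies like the one above.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linear toy data: correlation (and standardized beta) of 1.0.
print(round(pearson([1, 2, 3, 4], [2, 4, 6, 8]), 3))
```

With several predictors, as in the study's multiple regression, Betas additionally account for inter-predictor correlation, so they no longer equal the pairwise correlations.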
|
Keywords: |
Information Technology, Public Policy, E-Government, E-Democracy,
Telecommunications Infrastructure |
|
DOI: |
https://doi.org/10.5281/zenodo.19136654 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
A DESIGN FRAMEWORK FOR PERSONALIZED INTELLIGENT TUTORING SYSTEMS INTEGRATING
KNOWLEDGE TRACING AND RETRIEVAL-AUGMENTED GENERATION |
|
Author: |
POONNAKAN CHAMNANKIJ, EKACHAI NAOWANICH, SUWUT TUMTHONG |
|
Abstract: |
This research presents a study on the design framework of an intelligent,
personalized tutoring system to meet the individual needs of learners,
overcoming the limitations of traditional learning management systems (LMS) that
lack flexibility and brainstorming. The researchers designed a system
architecture based on the Intelligent Tutoring System (ITS) concept, comprising
seven main modules such as pre-assessment, intelligent learner grouping
(AI-Classify), and blended learning, built on a cloud-based database structure.
The developed system applies advanced techniques, including Retrieval-Augmented
Generation (RAG) to increase accuracy and reduce AI hallucinations; Knowledge
Tracing (KT) for continuous analysis of learner knowledge status; and an AI
Chatbot to act as a continuous academic advisor. Expert evaluation of the
conceptual framework's consistency showed an Index of Content Validity (IOC) for
each module ranging from 0.89 to 0.97, with an average of 0.93, exceeding the
acceptable criterion (≥ 0.50), indicating the appropriateness and consistency of
the content with the research objectives. The overall system suitability
assessment was at the highest level (x̅ = 4.71, S.D. = 0.45), and the technical
suitability dimension of the AI innovation had a high average score of 4.73. In
conclusion, this learning support system is designed to support deep learning,
analytical thinking, and learner participation, with the potential to enhance
educational practices aligned with 21st-century skills. |
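The IOC statistic reported above can be computed as a simple mean of expert congruence ratings; the ratings below are hypothetical, chosen only to illustrate the calculation and the acceptance criterion:

```python
# Illustrative IOC (index of content/item-objective congruence) calculation:
# each expert rates an item -1 (incongruent), 0 (unsure), or +1 (congruent),
# and the item passes when the mean rating meets the criterion (>= 0.50 here).

def ioc(ratings):
    return sum(ratings) / len(ratings)

def passes(ratings, criterion=0.50):
    return ioc(ratings) >= criterion

expert_ratings = [1, 1, 1, 0, 1, 1, 1, 1, 0]  # 9 hypothetical experts
print(round(ioc(expert_ratings), 2))
```

Applied per module, this yields the per-module IOC values (0.89 to 0.97) and the 0.93 average the abstract reports.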
|
Keywords: |
Intelligent Tutoring Systems, Personalized Learning, Knowledge Tracing,
Retrieval-Augmented Generation, Artificial Intelligence in Education |
|
DOI: |
https://doi.org/10.5281/zenodo.19136667 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
TRANSFORMER–GNN–UNET HYBRID DEEP LEARNING FOR SCALABLE WIRELESS RADIO MAPS |
|
Author: |
DR. SASIPRIYA G, N VISWANADHAREDDY, CHOPPA.ANANDA KUMAR REDDY, Dr.M.SUNDAR
RAJAN, AMIT VERMA, Dr.R.SENTHAMIL SELVAN |
|
Abstract: |
Wireless environments are complex and require high-resolution radio maps for
tasks such as indoor localization, spectrum management, and automated planning.
Under severe sparsity, empirical models have trouble preserving fine structural
information and scale inadequately. This paper proposes a Transformer–GNN–UNet
hybrid deep learning approach for scalable wireless radio maps: it fuses global
attention (Transformer), topology-aware reasoning (GNN), and multiscale
convolutional reconstruction (UNet) with physics-informed priors and
probabilistic outputs. Models were evaluated on the Indoor Radio Map Benchmark
and the MLSP 2025 dataset across sampling regimes (0.02%, 0.5%), using RMSE,
SSIM, LPIPS, CRPS, and topological (bottleneck) metrics. The hybrid achieves
RMSE 3.45 dB (95% CI ±0.12) versus 4.18 dB for SAIPP-Net and 6.84 dB for the
3GPP baseline; SSIM improves to 0.912 (vs 0.876), and CRPS drops to 0.95 (vs
1.26); bottleneck distance falls to 0.035 (vs 0.048). Results are statistically
significant (p = 0.001) and consistent across seeds; runtime (~48 ms) supports
near-real-time use. The proposed Transformer–GNN–UNet hybrid offers a scalable,
calibrated, and deployable solution for high-fidelity wireless radio mapping. |
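To make two of the reported evaluation metrics concrete, the sketch below computes RMSE in dB over flattened pathloss grids and the closed-form CRPS of a Gaussian predictive distribution, the kind of score used for calibrated probabilistic outputs. The toy pathloss values and function names are assumptions for illustration, not the paper's evaluation code.

```python
# Sketch of two radio-map evaluation metrics: RMSE (dB) between predicted and
# true pathloss values, and the closed-form CRPS of a Gaussian forecast
# N(mu, sigma^2) of an observation y, which rewards calibrated uncertainty.
import math

def rmse(pred, true):
    """Root-mean-square error over flattened map values (dB)."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred))

def crps_gaussian(mu, sigma, y):
    """CRPS for a N(mu, sigma^2) forecast of observation y (closed form)."""
    z = (y - mu) / sigma
    pdf = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

pred = [92.1, 88.4, 101.3, 95.0]   # predicted pathloss (dB), toy values
true = [90.0, 89.0, 100.0, 97.5]
print(f"RMSE = {rmse(pred, true):.2f} dB")
print(f"CRPS = {crps_gaussian(92.1, 3.0, 90.0):.3f}")
```

Lower is better for both: RMSE penalizes point-estimate error, while CRPS also penalizes over- or under-confident predictive spreads.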
|
Keywords: |
Transformer–GNN–UNet, Wireless Radio Maps, Hybrid Deep Learning, Scalable
Pathloss Prediction, Uncertainty Calibration. |
|
DOI: |
https://doi.org/10.5281/zenodo.19136688 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
FORECASTING SAUDI TOURISM DEMAND FOR VISION 2030: A COMPARATIVE ANALYSIS OF
REGRESSION AND LSTM MODELS |
|
Author: |
MASHEL AL-FEHAID, NAHALH ALGETHAMI |
|
Abstract: |
Tourism is central to Saudi Arabia’s Vision 2030 diversification agenda.
Reliable demand intelligence is needed to plan investment, capacity, and
seasonally targeted interventions. Through substantial investments in
infrastructure, heritage conservation, and digital transformation, Vision 2030
aims to position Saudi Arabia as a leading global tourism destination. Existing
studies rarely integrate driver attribution with robust nonlinear forecasting,
and often under-capture seasonal and pandemic shocks. To address this gap, we
develop a dual-scope framework that (i) explains variation in total tourists
using regression and (ii) forecasts future inflows with deep learning. The
driver model examines tourist type, reason for visit, spending behavior,
religious periods (e.g., Ramadan, Hajj), and pandemic effects. A degree-3
polynomial regression achieves the highest explanatory fit (R² = 0.805), with
reason for visit and tourist type emerging as significant determinants. For
forecasting, we design and compare several LSTM architectures; the best, a
generator–discriminator LSTM, attains a testing R² of 0.756, capturing nonlinear
and seasonal structure. Together, these results unite driver evidence with
accurate forecasts, supporting targeted marketing, resource allocation, and
season-specific operations in line with Vision 2030. The study contributes an
integrated dual-scope framework for Saudi arrivals, a domain-specific exogenous
design (religious and pandemic effects), and comparative deep-learning
benchmarks aligned to policy-relevant metrics, moving beyond determinants-only
studies, spending-centric ML/ARIMA models, and attribution-free ensembles. By uniting
interpretable driver evidence with accurate deep‑learning forecasts under Vision
2030, this study offers decision‑grade demand intelligence that the current
literature has not yet provided in a single, Saudi‑specific framework. |
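The driver-model step described above can be sketched as a degree-3 polynomial fit scored by R², the fit statistic the abstract reports. The series below is synthetic for illustration; the paper's R² = 0.805 comes from its own multi-driver data, and the `poly_r2` helper is a hypothetical name, not the study's code.

```python
# Sketch of the driver-model fit: least-squares degree-3 polynomial regression
# on a (synthetic) arrivals series, scored by the coefficient of determination
# R^2 = 1 - SS_res / SS_tot.
import numpy as np

def poly_r2(x, y, degree=3):
    coeffs = np.polyfit(x, y, degree)     # least-squares cubic fit
    y_hat = np.polyval(coeffs, x)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return coeffs, 1 - ss_res / ss_tot

# Synthetic monthly arrivals (millions) with a nonlinear trend plus noise
x = np.arange(24, dtype=float)
rng = np.random.default_rng(0)
y = 1.5 + 0.08 * x + 0.004 * x**2 + rng.normal(0, 0.1, x.size)
_, r2 = poly_r2(x, y)
print(f"R^2 = {r2:.3f}")   # near 1 for this smooth synthetic series
```

The same R² computation applies to the LSTM forecasts, scoring predicted against held-out arrivals.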
|
Keywords: |
Machine Learning, Deep Learning, Regression Analysis, Long Short-Term Memory,
Tourism Sector. |
|
DOI: |
https://doi.org/10.5281/zenodo.19136702 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
DEVELOPMENT AND EVALUATION OF A CONCEPTUAL FRAMEWORK FOR AN INTELLIGENT PROJECT
TOPIC RECOMMENDATION AND TIMELINE SYSTEM IN UNDERGRADUATE PROJECT COURSES TO
ENHANCE ALIGNMENT WITH COURSE LEARNING OUTCOMES |
|
Author: |
ANGSANA PHONSUK, SUWUT TUMTHONG |
|
Abstract: |
Undergraduate capstone project development faces critical challenges, including
students' difficulty in selecting topics aligned with course learning outcomes,
inefficient time management, and prolonged topic approval processes that delay
graduation timelines. This research develops and evaluates a conceptual
framework for an intelligent project topic recommendation and timeline
management system integrating Constructive Alignment theory, artificial
intelligence, and project management principles. The framework comprises five
core components: data management and integration, an intelligent recommendation
system (hybrid techniques), natural language processing for topic-CLO analysis,
automated timeline planning, and continuous evaluation. Expert evaluation by nine
specialists across five dimensions showed highly favorable results (mean = 4.50,
SD = 0.56), with System Quality (M = 4.56), Intelligence Quality (M = 4.44),
Decision Quality (M = 4.52), Learning Quality (M = 4.44), and User & Environment
Quality (M = 4.56) all achieving "High" to "Highest" appropriateness levels. The
developed framework has significant potential to address capstone project
challenges and is feasible for practical implementation in educational
institutions. |
|
Keywords: |
Project Topic, Recommendation, Timeline System, Chatbot, Tracking, Course
Learning Outcomes |
|
DOI: |
https://doi.org/10.5281/zenodo.19136729 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
INTEGRATION OF ARTIFICIAL INTELLIGENCE IN PUBLIC GOVERNANCE OF INTELLIGENT
TRANSPORT SYSTEMS |
|
Author: |
GRISHA AMIRKHANYAN, NATALIIA PROTSIUK, VLADYSLAV KARIEV, OLEKSANDR FOMIN, PAVLO
SOLOVII |
|
Abstract: |
Relevance of the research: The relevance of the study is determined by the
need to implement cognitive and adaptive artificial intelligence (AI)-based
solutions to increase the efficiency, sustainability, and regulatory
manageability of intelligent transport systems (ITS) in public governance.
Research objective: The research objective is to substantiate the
methodological principles of creating an AI-architecture of ITS in public
administration, taking into account cognitive adaptability, interoperability,
and regulatory consistency. Research methods: The research employed the
following methods: retrospective stratification, metric-based and model-based
analysis, Unified Modelling Language (UML)-based modelling, structural and
functional optimization, and metric-based and model-based verification. Results:
A multi-level verification of the AI-Optimized ITS Framework for Public
Governance was carried out through retrospective stratification, multimetric
analysis, UML-based modelling and empirical indicator assessment. Cognitive
inertia, institutional fragmentation and regulatory decentralization of
traditional ITS were identified. A metrically confirmed efficiency increase was
demonstrated, with TTR = 0.21, QL = 0.25, Accuracy = 0.93, F1 = 0.92,
ΔCO₂ = 0.90, FC = 0.92, RI = 0.89, CRI = 0.91, and ATS = 0.94. The AI
architecture demonstrates regulatory traceability,
cognitive adaptability, and management scalability. Academic novelty of the
research: The academic novelty of the research is the formalization of the
cognitive and adaptive AI-Optimized ITS Framework for Public Governance with the
architecture of Explainable AI (XAI)-transparency, regulatory traceability, and
algorithmic optimization. The integration of cognitive interoperability,
regulatory validation, and indicator efficiency in smart mobility management is
ensured for the first time. Further research prospects: Future research
prospects include initiating a controlled field experiment in urban
transportation systems to collect validated field data. This approach will
provide an empirical assessment of the algorithmic accuracy, procedural
resilience, and normative traceability of the proposed framework. |
|
Keywords: |
AI-Optimized ITS, Cognitive Adaptivity, Explainable AI (XAI), Travel Time
Reduction, Queue Length Optimization, CO₂ Emission Minimization, Smart Urban
Mobility |
|
DOI: |
https://doi.org/10.5281/zenodo.19136753 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
FEDERATED DIFFERENTIALLY PRIVATE AUTOENCODERS FOR HEALTH-INSURANCE FRAUD
DETECTION AT SCALE |
|
Author: |
GANESH SHANKAR SARGAM, RAMPRAKASH KALAPALA |
|
Abstract: |
Health-insurance fraud has grown in scale and sophistication, imposing
substantial financial losses and undermining trust in healthcare systems.
Conventional, centrally trained detectors require aggregating sensitive claim
data, increasing breach risk and complicating regulatory compliance. Despite
advances in fraud analytics, a core unresolved challenge remains: enabling
multi-institution collaboration without exposing raw claims data while
preserving detection accuracy. We present a federated anomaly-detection
framework that combines client-side deep autoencoders with Federated Averaging
(FedAvg) and differential privacy (DP), orchestrated on Amazon Web Services
(AWS) for secure, scalable deployment. Participating organizations train locally
on private claims and transmit only DP-protected model updates to a cloud
aggregator (EC2/SageMaker), while encrypted artifacts are persisted in Amazon
S3. Using real insurance-claim records augmented with realistic synthetic fraud
cases, the proposed system achieves 91.2% accuracy and an F1-score of 0.885, and
reduces false-positive rate by 43% relative to a traditional centralized
baseline. Privacy controls lower estimated data-exposure risk by 78% with
minimal communication overhead, enabling near real-time operation across
multiple sites. These results indicate that federated training with formal
privacy protection can maintain strong detection performance while limiting
sensitive data movement. This study contributes empirical evidence that
integrating differential privacy within federated unsupervised anomaly detection
can sustain high detection performance under heterogeneous data distributions
while providing quantifiable privacy–utility trade-offs. Overall, the framework
offers an effective, scalable, and privacy-aware solution for distributed fraud
detection, supporting compliance with regulations such as HIPAA and GDPR and
facilitating collaborative analytics without raw-data sharing. |
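The aggregation step described above can be sketched in a few lines: each client clips its local model update and adds Gaussian noise (a common differential-privacy mechanism) before the server applies Federated Averaging. Weights are flat vectors here and the client updates are invented toy values; a real deployment would apply this per layer to the autoencoder weights, with a formally calibrated noise scale.

```python
# Minimal sketch of DP-protected Federated Averaging: clients clip their
# updates to a fixed L2 norm and add Gaussian noise before sending them;
# the server averages the protected updates into a global update.
import math
import random

def dp_protect(update, clip=1.0, noise_std=0.1, rng=random.Random(0)):
    """Clip the update to L2 norm <= clip, then add Gaussian noise."""
    norm = math.sqrt(sum(u * u for u in update))
    scale = min(1.0, clip / norm) if norm > 0 else 1.0
    return [u * scale + rng.gauss(0, noise_std) for u in update]

def fedavg(client_updates):
    """Server-side Federated Averaging of equally weighted client updates."""
    n = len(client_updates)
    return [sum(us) / n for us in zip(*client_updates)]

# Three hypothetical insurers' local updates to a shared model
updates = [[0.4, -0.2, 0.1], [0.5, -0.1, 0.0], [0.3, -0.3, 0.2]]
protected = [dp_protect(u) for u in updates]
global_update = fedavg(protected)
```

Because only the noisy, clipped updates leave each site, raw claims never travel to the aggregator, which is what underpins the compliance argument in the abstract.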
|
Keywords: |
Federated Learning; Differential Privacy; Autoencoders; Anomaly Detection;
Health-Insurance Claims; Fraud Detection; Secure Aggregation; Cloud
Orchestration (AWS); Decentralized Machine Learning. |
|
DOI: |
https://doi.org/10.5281/zenodo.19136763 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
Title: |
A HYBRID DEEP LEARNING-BASED PATH-PLANNING ALGORITHM FOR MULTIFUNCTIONAL
MANIPULATOR-TYPE DISINFECTION ROBOT |
|
Author: |
DR. KOVVURI N BHARGAVI, TANAYA GANGULY, MANOJ KUMAR PADHI, DR.G. JOSE MOSES, DR.
NIRAJ KUMAR, NAGENDAR YAMSANI |
|
Abstract: |
This research focuses on developing an advanced path-planning algorithm for a
multifunctional manipulator-type disinfection robot, incorporating deep learning
techniques to optimize its operational efficiency and adaptability. Moreover,
these robots are used to disinfect surfaces in sensitive environments, such as
public facilities, laboratories, and hospitals. However, optimizing robot
performance is challenging because energy usage and disinfection coverage time
make navigating dynamic environments difficult. In addition, arm coordination
and obstacle avoidance alone do not provide effective disinfection under
changing criteria. To overcome these issues, this study shows that a hybrid
solution integrating deep learning (DL) frameworks with path-planning
strategies enhances both efficiency and adaptability. The study's objective is
to address the limitations of traditional algorithms, such as A* and
Dijkstra's, which fail in dynamic environments like hospitals where obstacles
are constantly changing. The developed algorithm finds a better route from the
robot's starting location to its target while accounting for terrain difficulty
and map structure. To overcome
these limitations, a hybrid model is proposed, combining the A* algorithm for
global path planning with Proximal Policy Optimization (PPO), a deep
reinforcement learning method for real-time path adjustments. The methodology
involves training the robot in simulated environments and testing it in
real-world settings such as hospitals. Results show significant improvements in
path efficiency, energy consumption, and collision avoidance, with the proposed
algorithm achieving 98.5% coverage efficiency. The implications of this research
highlight the potential of deep learning to enhance the adaptability and
effectiveness of disinfection robots, especially in dynamic, high-traffic
environments, ensuring thorough sanitation and minimizing energy usage. |
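The global-planning half of the hybrid can be sketched as standard A* on an occupancy grid with a Manhattan-distance heuristic; the PPO policy described above would then adjust this route online. The grid, positions, and function name are toy assumptions, not the paper's implementation.

```python
# Sketch of global path planning with A*: shortest 4-connected path on a 0/1
# occupancy grid (0 = free, 1 = obstacle), Manhattan distance as heuristic.
import heapq

def a_star(grid, start, goal):
    """Return a shortest 4-connected path from start to goal, or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    seen = {}
    while open_set:
        f, g, pos, path = heapq.heappop(open_set)
        if pos == goal:
            return path
        if seen.get(pos, float("inf")) <= g:
            continue                      # already reached pos at lower cost
        seen[pos] = g
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],   # obstacle row forces a detour
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
```

Because the Manhattan heuristic never overestimates the remaining unit-cost steps, the first time the goal is popped the returned path is optimal.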
|
Keywords: |
Path Planning, Deep Reinforcement Learning, Disinfection Robots, A*
Algorithm, Proximal Policy Optimization. |
|
DOI: |
https://doi.org/10.5281/zenodo.19136782 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th March 2026 -- Vol. 104. No. 5-- 2026 |
|
Full
Text |
|
|
|