Inteligencia Artificial <p style="text-align: justify;"><strong><em><a style="color: #003366; text-decoration: underline;" href="" target="_blank" rel="noopener">Inteligencia Artificial</a></em></strong> is an international open access journal promoted by the Iberoamerican Society of Artificial Intelligence (<a href="">IBERAMIA</a>). Since 1997, the journal has published high-quality original papers reporting theoretical or applied advances in all areas of Artificial Intelligence. There are no fees for subscription, publication, or editing. Articles can be written in English, Spanish, or Portuguese and <a href="">are subjected</a> to a double-blind peer review process. The journal is abstracted and indexed in several <a href="">databases</a>.</p> Sociedad Iberoamericana de Inteligencia Artificial (IBERAMIA) en-US Inteligencia Artificial 1137-3601 <p>Open Access publishing.<br />Licensed under <a href="">Creative Commons CC-BY-NC</a><br />Inteligencia Artificial (Ed. IBERAMIA)<br />ISSN: 1988-3064 (online).<br />(C) IBERAMIA &amp; The Authors</p> AEPIA (Asociación Española para la Inteligencia Artificial). 40th Anniversary (1984 – 2024) <p>This year marks the first forty years of the Spanish Association for Artificial Intelligence (AEPIA), forty years of continuous progress along the path of Artificial Intelligence. From that distant year of 1984, when a group of pioneering researchers led by Professor José Cuena, founder and first president of AEPIA, saw the need to bring the entire scientific and professional community together to raise awareness of AI, up to the present day, many generations have contributed to the history of AEPIA, building an association with many achievements and many challenges still ahead.</p> Felisa Verdejo Francisco Garijo Federico Barber Antonio Bahamonde Amparo Alonso Alicia Troncoso Copyright (c) 2024 Iberamia & The Authors 2024-05-17 2024-05-17 27 74 1 11 Accurate Price Prediction by Double Deep Q-Network <p>For several decades, time series data have been at the center of scholars' attention for predicting future market prices, the most fundamental and challenging of which is the prediction of stock market prices. It is important to note that the algorithms with the fewest errors in price prediction are the most applicable. Several families of methods have been suggested for price prediction in stock markets: time series data analysis, mathematical and statistical analysis, signal processing, pattern recognition, and machine learning.
One shortcoming of the aforementioned methods is that they fail to recognize sudden price changes, which leads to larger prediction errors. To address this, the Double Deep Q-Network (DDQN) algorithm, whose deep neural networks include LSTM-CNN layers, has been employed. When confronting price fluctuations, the agent achieves better performance by exploiting the advantages of the LSTM-CNN layers. In this research, the algorithm has been applied to the Iranian gold market, covering six different types of gold, from 2009 to 2020. The results reveal that the proposed method is more precise than other suggested methods when confronting sudden price changes.</p> Mohammad Reza Feizi Derakhshi Bahram Lotfimanesh Omid Amani Copyright (c) 2024 Iberamia & The Authors 2024-05-17 2024-05-17 27 74 12 21 10.4114/intartif.vol27iss74pp12-21 A Novel Deep Learning Model for Pancreas Segmentation: Pascal U-Net <p>Robust and reliable automated organ segmentation from abdominal images is a crucial problem in both quantitative imaging analysis and computer-aided diagnosis. Automatic pancreas segmentation from abdominal CT images is especially challenging for two main reasons: (1) high variability in anatomy (such as shape and size) and location across different patients, and (2) low contrast with neighboring tissues. For these reasons, achieving high accuracy in pancreas segmentation is a hard image segmentation problem. In this paper, we propose Pascal U-Net, a novel convolutional neural network-based deep learning model for pancreas segmentation. The performance of the proposed model is evaluated on The Cancer Imaging Archive (TCIA) Pancreas CT database and an abdominal CT dataset obtained from the Selcuk University Medicine Faculty Radiology Department. During the experimental studies, the k-fold cross-validation method is used.
Furthermore, the results of the proposed model are compared with those of the traditional U-Net. When the results obtained by Pascal U-Net and the traditional U-Net are compared for different batch sizes and fold numbers, experiments on both datasets validate the effectiveness of the Pascal U-Net model for pancreas segmentation.</p> Ender Kurnaz Rahime Ceylan Mustafa Alper Bozkurt Hakan Cebeci Mustafa Koplay Copyright (c) 2024 Iberamia & The Authors 2024-05-17 2024-05-17 27 74 22 36 10.4114/intartif.vol27iss74pp22-36 Age-Invariant Cross-Age Face Verification using Transfer Learning <p>The integration of face verification technology has become indispensable in numerous safety and security software systems. Despite promising results, the field of face verification encounters significant challenges due to age-related disparities. Human facial characteristics undergo substantial transformations over time, producing diverse variations including changes in facial texture, morphology, facial hair, and eyeglass adoption. This study presents a pioneering methodology for cross-age face verification, utilizing advanced deep learning techniques to extract resilient and distinctive facial features that are less susceptible to age-related fluctuations. The feature extraction process combines handcrafted features, namely Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG), with deep features from MobileNetV2 and VGG-16 networks. Because the texture of facial skin conveys age-related characteristics, the well-known texture feature extractors LBP and HOG are preferred. These features are concatenated to achieve fusion, and subsequent layers fine-tune them.
Experimental validation utilizing the Cross-Age Celebrity Dataset demonstrates remarkable efficacy, achieving an accuracy of 98.32%.</p> Newlin Shebiah Russel Arivazhagan Selvaraj Dhanya Devi S Dhivyarupini M Copyright (c) 2024 Iberamia & The Authors 2024-05-17 2024-05-17 27 74 37 47 10.4114/intartif.vol27iss74pp37-47 FRESHNets: Highly Accurate and Efficient Food Freshness Assessment Based on Deep Convolutional Neural Networks <p>Food freshness classification is a growing concern in the food industry, mainly to protect consumer health and prevent illness and poisoning from consuming spoiled food. Intending to take a significant step towards improving food safety and quality control measures in the industry, this study presents two deep learning-based models for the classification of fruit and vegetable freshness: a robust model and an efficient model. The models' performance evaluation shows remarkable results: in terms of accuracy, the robust model and the efficient model achieved 97.6% and 94.0% respectively, while in terms of Area Under the Curve (AUC) score, both models achieved more than 99%, with a difference in inference time between the two models of 13 seconds over 844 images.</p> Jorge Felix Martínez Pazos Jorge Gulín González David Batard Lorenzo Arturo Orellana García Copyright (c) 2024 Iberamia & The Authors 2024-05-17 2024-05-17 27 74 48 61 10.4114/intartif.vol27iss74pp48-61 The Superiority of Fine-tuning over Full-training for the Efficient Diagnosis of COPD from CXR Images <p>This research investigates the use of deep learning for diagnosing lung diseases such as Chronic Obstructive Pulmonary Disease (COPD) from Chest X-rays (CXR). The study assesses the impact of deep learning on improving these diagnoses by comparing the performance of models trained from scratch with those obtained by fine-tuning established architectures such as InceptionV3, ResNet50, and VGG-19.
The study revealed that fine-tuning pre-trained models offers significant benefits: faster convergence, improved stability, and increased accuracy. Data augmentation techniques were found to be particularly useful when dealing with limited or unbalanced datasets. A custom CNN model, Iyke-Net, showed promising results when fine-tuned. Interestingly, models using grayscale images outperformed those using colour images in disease classification, suggesting that colour information might be less critical than previously thought for certain diagnostic procedures. The study emphasizes the importance of balancing model complexity with computational efficiency and diagnostic accuracy, and advocates for refining existing deep learning models for COPD diagnosis from CXR images, paving the way for further innovations in AI-enhanced medical diagnostics.</p> Victor Ikechukwu Agughasi Copyright (c) 2024 Iberamia & The Authors 2024-05-17 2024-05-17 27 74 62 79 10.4114/intartif.vol27iss74pp62-79 Machine Learning-based Extrapolation of Crop Cultivation Cost <p>It is important to understand the relationships among operational expenses such as labour, seed, irrigation, insecticide, fertilizer, and manure costs necessary for the cultivation of crops. A precise estimate of the cost of cultivating a crop can offer vital information for agricultural decision-making. The main goal of the study is to compare machine learning (ML) techniques that measure relationships among operational cost characteristics in order to predict crop cultivation costs before the start of the growing season, using the dataset made available by the Ministry of Agriculture and Farmers Welfare of the Government of India. This paper describes various ML regression techniques, compares the learning algorithms, and determines the most efficient regression algorithms based on the dataset, the number of samples, and the attributes.
The dataset used for predicting the cost contains 1680 instances covering varying costs for 14 different crops over 12 years (2010-2011 to 2021-2022). Ten different ML algorithms are considered for predicting the crop cultivation cost. The evaluation results show that Random Forest (RF), Decision Tree (DT), Extreme gradient boosting (XR) and K-Neighbours (KN) regression provide better performance in terms of coefficient of determination (R2), root mean square error (RMSE), and mean absolute error (MAE), as well as training and testing time. This study also compares the different ML techniques and shows significant differences among them using the statistical analysis of variance (ANOVA) test. The optimal hyperparameters for the ML models are found using the GridSearchCV and RandomizedSearchCV functions, which improves the models' capacity for generalisation.</p> Poonam Bari Lata Ragha Copyright (c) 2024 Iberamia & The Authors 2024-05-17 2024-05-17 27 74 80 101 10.4114/intartif.vol27iss74pp80-101
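As an illustration of the hyperparameter tuning and evaluation pipeline described in the last abstract, here is a minimal, hypothetical sketch: it tunes a Random Forest regressor with scikit-learn's GridSearchCV and reports R2, RMSE, and MAE. Synthetic regression data stands in for the Ministry of Agriculture dataset, and the grid values are illustrative, not the ones used in the paper.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the crop-cost data: 1680 instances, as in the abstract.
X, y = make_regression(n_samples=1680, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Exhaustive search over a small, illustrative Random Forest hyperparameter grid.
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [100, 200], "max_depth": [None, 10]},
    cv=5,
    scoring="r2",
)
search.fit(X_train, y_train)

# Evaluate the best model on held-out data with the metrics named in the abstract.
pred = search.predict(X_test)
r2 = r2_score(y_test, pred)
rmse = mean_squared_error(y_test, pred) ** 0.5
mae = mean_absolute_error(y_test, pred)
print(f"R2={r2:.3f} RMSE={rmse:.2f} MAE={mae:.2f}")
```

RandomizedSearchCV exposes the same fit/predict interface and can be swapped in when the parameter grid is too large to search exhaustively.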