IKERBASQUE Research Fellow
University of Deusto, Spain, Deusto Institute of Technology
Antonio D. Masegosa received his degree in Computer Engineering in 2005 and his PhD in Computer Science in 2010, both from the University of Granada, Spain. From June 2010 to November 2014 he was a post-doctoral researcher at the Research Center for ICT of the University of Granada. In 2014 he received an IKERBASQUE Research Fellowship to work in the Mobility Unit of the Deusto Institute of Technology in Bilbao, Spain. He has published four books, sixteen JCR papers and more than 20 papers in international and national conferences. He has supervised one PhD thesis and one MSc thesis, and is currently supervising another PhD thesis. He has participated in 3 European research projects (TIMON, PostLowCIT and LOGISTAR) as well as 5 national and 4 regional research projects. He is a member of the program committee of international conferences such as IEEE CEC, GECCO, ICCCI, ECAL, HM and NICSO, and has served as a reviewer for international journals such as European Journal of Operational Research, Fuzzy Sets and Systems, Information Sciences, Neurocomputing, Optimization Letters and Memetic Computing. His main research interests are Artificial Intelligence, Intelligent Systems, Soft Computing, Hybrid Metaheuristics, Machine Learning, Deep Learning, Intelligent Transportation Systems, Logistic Networks, Travel Demand Estimation and Traffic Forecasting.
University of Deusto, Spain, Deusto Institute of Technology
University of Granada, Spain, Research Center for ICT
University of Granada, Spain, Department of Computer Sciences and Artificial Intelligence
Ph.D. in Computer Science
University of Granada, Spain
Master in Soft Computing and Intelligent Systems
University of Granada
Computer Engineering Degree
University of Granada, Spain
I am a member of the Mobility Unit of the Deusto Institute of Technology.
My research focuses on the development of Soft Computing techniques, mainly for optimization and machine learning, to address problems that arise in the mobility of people and goods, with the final objective of achieving safer and more sustainable mobility.
More specifically, the areas I am currently focused on are:
Juan Sebastian Angarita (Current co-advisor together with Isaac Triguero (University of Nottingham, UK))
Topic: Auto-ML methods for short-term traffic forecasting
University of Deusto, Bilbao, Spain
Alejo Vazquez (Current co-advisor together with Enrique Onieva (University of Deusto))
Topic: Evolutionary algorithms for trajectory data mining
University of Deusto, Bilbao, Spain
Jenny Fajardo (co-advisor together with D. Pelta (University of Granada))
Topic: Soft Computing in Dynamic Optimization Problems
University of Granada, Spain - CUJAE, Havana, Cuba
Virgilio Cruz (Co-mentored together with D. Pelta)
Title: Intelligent systems for social and healthcare attention: a tool for the design and validation of service deployment
University of Granada, Granada, Spain
Federico Rutolo (Co-mentored together with J.L. Verdegay)
Title: Study and development of cooperative strategies for Systems Biology problems
University of Granada, Spain - University of Rome, Italy
You can also find a complete list of my publications in my Google Scholar profile.
As the climate and environment continue to fluctuate, researchers are urgently looking for new ways to preserve our limited resources and prevent further environmental degradation. The answer can be found through computer science, a field that is evolving at precisely the time it is needed most.
Soft Computing Applications for Renewable Energy and Energy Efficiency brings together the latest technological research in computational intelligence and fuzzy logic as a way to care for our environment. This reference work highlights current advances and future trends in environmental sustainability using the principles of soft computing, making it an essential resource for students, researchers, engineers, and practitioners in the fields of project engineering and energy science.
Biological and other natural processes have always been a source of inspiration for computer science and information technology. Many emerging problem solving techniques integrate advanced evolution and cooperation strategies, encompassing a range of spatio-temporal scales for visionary conceptualization of evolutionary computation.
This book is a collection of the research works presented at the VI International Workshop on Nature Inspired Cooperative Strategies for Optimization (NICSO), held in Canterbury, UK. Previous editions of NICSO were held in Granada, Spain (2006 & 2010), Acireale, Italy (2007), Tenerife, Spain (2008), and Cluj-Napoca, Romania (2011). NICSO 2013, and this book, provide a forum where state-of-the-art research, the latest ideas and emerging areas of nature-inspired cooperative strategies for problem solving are vigorously discussed and exchanged among the scientific community. The breadth and variety of articles in this book report on nature-inspired methods and applications such as Swarm Intelligence, Hyper-heuristics, Evolutionary Algorithms, Cellular Automata, Artificial Bee Colony, Dynamic Optimization, Support Vector Machines, Multi-Agent Systems, Ant Clustering, Evolutionary Design Optimisation, Game Theory and several other cooperation models.
The evolution of soft computing applications has offered a multitude of methodologies and techniques that are useful in facilitating new ways to address practical and real scenarios in a variety of fields.
Exploring Innovative and Successful Applications of Soft Computing highlights the applications and conclusions associated with soft computing in different technological environments. Providing potential results based on new trends in the development of these services, this book aims to be a reference source for researchers, practitioners, and students interested in the most successful soft computing methods applied to recent problems.
This book is a PhD dissertation which focuses on the study, design, development and application of centralised cooperative strategies for optimisation problems. These models consist of a set of parallel cooperating agents, where each agent carries out a search in a solution space sharing information with the rest of the agents. Many studies have shown that this cooperation leads to more efficient and effective strategies. The authors start describing the most known trajectory and population-based metaheuristics and introduce the cooperative strategies. The core of this book is devoted to the following topics: analysis of the composition and the cooperation scheme, hybridization with Reactive Search ideas, application of centralised cooperative strategies to Dynamic Optimisation Problems and resolution of sets of instances by a cooperative method. The book is addressed to students or researchers in the fields of Intelligent Systems, Soft Computing, Optimisation Techniques and, especially, Hybrid Metaheuristics.
Intelligent Transportation Systems emerged to meet the increasing demand for more efficient, reliable and safer transportation systems. These systems combine electronic, communication and information technologies with traffic engineering to respond to these challenges. The benefits of Intelligent Transportation Systems have been extensively proven in many different facets of transport, and Soft Computing has played a major role in achieving these successful results. This book chapter gathers and discusses some of the most relevant and recent advances in the application of Soft Computing to four important areas of Intelligent Transportation Systems: autonomous driving, traffic state prediction, vehicle route planning and vehicular ad hoc networks. © Springer International Publishing AG 2018.
Despite the broad range of Machine Learning (ML) algorithms, there are no clear baselines for finding the best method and its configuration for a given Short-Term Traffic Forecasting (STTF) problem. In ML, this is known as the Model Selection Problem (MSP). Although Automatic Algorithm Selection (AAS) has proven successful in dealing with the MSP in other areas, it has hardly been explored in STTF. This paper examines the benefits of AAS in this field. To this end, we have used Auto-WEKA, a well-known AAS method, and compared it to the general approach (which consists of selecting the best of a set of algorithms) on a multi-class imbalanced classification STTF problem. Experimental results show AAS to be a promising methodology in this area and allow important conclusions to be drawn on how to improve the performance of AAS methods when dealing with STTF. © 2018, Springer Nature Switzerland AG.
Vehicular Ad-Hoc Networks (VANETs) have attracted great interest in recent years due to the huge number of innovative applications that they can enable. Some of these applications can have a high impact on reducing the Greenhouse Gas emissions produced by vehicles, especially those related to traffic management and driver assistance. Many of these services require disseminating information from a central server to a set of vehicles located in a particular region. This task presents important challenges in VANETs, especially at large scale. In this work, we present a new approach for information dissemination in VANETs where the structure of the communications is configured using a model based on Covering Location Problems that is optimized by means of a Genetic Algorithm. The results obtained on a realistic scenario show that the new approach can provide good solutions under very demanding response times and that it obtains competitive results with respect to reference algorithms proposed in the literature. © 2018, Springer International Publishing AG.
An open question that arises in the design of adaptive schemes for Dynamic Optimization Problems is deciding what to do with the knowledge acquired once a change in the environment is detected: forget it, or reuse it after subsequent changes? In this work, the knowledge is associated with the selection probabilities of two local search operators in the Adaptive Hill Climbing Memetic Algorithm. When a problem change is detected, those probability values can be restarted or maintained. The experiments performed over five binary-coded problems (for a total of 140 different scenarios) clearly show that keeping the information is better than forgetting it.
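The keep-or-forget design question discussed in this abstract can be illustrated with a small sketch (not the paper's actual code): an adaptive selector over two hypothetical local search operators, whose selection probabilities can either be kept or reset when an environment change is detected. The probability-matching update rule and all parameter values are illustrative assumptions.

```python
import random

class AdaptiveOperatorSelector:
    """Selects between two local-search operators with adaptive probabilities.

    Illustrative sketch: the update rule and parameters are assumptions,
    not taken from the Adaptive Hill Climbing Memetic Algorithm itself.
    """

    def __init__(self, p_min=0.1):
        self.p_min = p_min        # floor so neither operator is ever discarded
        self.probs = [0.5, 0.5]   # selection probabilities of the two operators

    def select(self):
        """Pick an operator index according to the current probabilities."""
        return 0 if random.random() < self.probs[0] else 1

    def reward(self, op, improvement):
        """Probability matching: shift probability toward the rewarded operator."""
        self.probs[op] = min(1 - self.p_min, self.probs[op] + 0.1 * improvement)
        self.probs[1 - op] = 1 - self.probs[op]

    def on_environment_change(self, keep_knowledge=True):
        """The design question from the abstract: keep or restart the learned
        probabilities when a change in the environment is detected."""
        if not keep_knowledge:
            self.probs = [0.5, 0.5]
```

Keeping the probabilities across changes (`keep_knowledge=True`) corresponds to the option the experiments reported above found to work better.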
One of the most widely used methodologies for prospective analysis is the scenario method. The first stage of this method, the so-called structural analysis, aims to determine the most important variables of a system. Despite being widely used, structural analysis still presents some shortcomings, mainly due to the vagueness of the information used in the process. In this sense, the application of Soft Computing to structural analysis can reduce the impact of these problems by providing more interpretable and robust models. With this in mind, we present a methodology for structural analysis based on computing-with-words techniques that properly addresses vagueness and increases interpretability. The method has been applied to a real problem with encouraging results.
In this work we discuss to what extent and in what contexts the use of knowledge discovery techniques can improve the performance of cooperative strategies for optimization. The study is approached through two different case studies that differ in the definition of the initial cooperative strategy, the problem chosen as test bed (the Uncapacitated Single Allocation p-Hub Median and knapsack problems) and the number of instances available for applying data mining. The results obtained show that these techniques can improve cooperative strategies as long as the application context fulfils certain characteristics.
Cooperative strategies are search techniques composed of a set of individual methods (solvers) that cooperate, through information exchange, to solve an optimization problem. In this paper, we focus on the composition of that set and analyze the results of a cooperative strategy when the solvers in the set are equal (homogeneous) or different (heterogeneous). Using the Uncapacitated Single Allocation p-Hub Median Problem as test bed, we found that taking copies of the same solver and adding cooperation gives better results than using an isolated solver. Regarding the use of different solvers, the cooperative heterogeneous scheme is usually better than the best-performing isolated solver (which usually varies with the instance being solved). Comparing the heterogeneous and homogeneous compositions of the cooperative strategy, an advantage of the former scheme can be observed.
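As a loose illustration of the homogeneous case described above (not taken from the paper; the solver, the toy binary problem and the exchange period are all assumptions), several copies of the same simple hill climber can periodically adopt the best solution found so far:

```python
import random

def hill_climb_step(solution, objective):
    """One bit-flip hill-climbing step (illustrative solver for a binary problem)."""
    i = random.randrange(len(solution))
    neighbour = solution[:]
    neighbour[i] = 1 - neighbour[i]
    return neighbour if objective(neighbour) >= objective(solution) else solution

def cooperative_search(objective, n_bits=20, n_solvers=4, iters=200, seed=0):
    """Homogeneous cooperative strategy (sketch): several copies of the same
    solver run in parallel and periodically share the best solution found."""
    random.seed(seed)
    solvers = [[random.randint(0, 1) for _ in range(n_bits)]
               for _ in range(n_solvers)]
    for it in range(iters):
        solvers = [hill_climb_step(s, objective) for s in solvers]
        if it % 20 == 0:  # information exchange: all solvers adopt the current best
            best = max(solvers, key=objective)
            solvers = [best[:] for _ in solvers]
    return max(solvers, key=objective)
```

On a toy maximization problem such as OneMax (`objective=sum`), the periodic exchange concentrates the search effort around the best solution found so far, which is the intuition behind the homogeneous scheme outperforming an isolated solver.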
Cooperative strategies and reactive search are very promising techniques for solving hard optimization problems, since they reduce human intervention required to set up a method when the resolution of an unknown instance is needed. However, as far as we know, a hybrid between both techniques has not yet been proposed in the literature. In this work, we show how reactive search principles can be incorporated into a simple rule-driven centralised cooperative strategy. The proposed method has been tested on the Uncapacitated Single Allocation p-Hub Median Problem, obtaining promising results.
Most adaptive metaheuristics face the resolution of an instance from scratch, without considering previous runs. Based on the idea that the computational effort made and the knowledge gained when solving an instance should be used to solve similar ones, we present a new metaheuristic strategy that permits the simultaneous solution of a group of instances. The strategy is based on a set of adaptive operators that work on several sets of solutions belonging to different problem instances. The method has been tested on MAX-SAT with sets of various instances, obtaining promising results.
One of the most challenging issues when facing a classification problem is to deal with imbalanced datasets. Recently, ensemble classification techniques have proven to be very successful in addressing this problem. We present an ensemble classification approach based on feature space partitioning for imbalanced classification. A hybrid metaheuristic called GACE is used to optimize the different parameters related to the feature space partitioning. To assess the performance of the proposal, an extensive experimentation over imbalanced and real-world datasets compares different configurations and base classifiers. Its performance is competitive with that of reference techniques in the literature.
Researchers working in any area related to computational algorithms (either defining new algorithms or improving existing ones) usually find it very difficult to test their work. Comparisons among different studies in this field are often a hard task, due to the ambiguity or lack of detail in the presentation of the work and its results. On many occasions, the replication of the work conducted by other researchers is required, which leads to a waste of time and a delay in research advances. The authors of this study propose a procedure for introducing new techniques and their results in the field of routing problems. In this paper, this procedure is detailed, and a set of good practices to follow is described in depth. It is noteworthy that this procedure can be applied to any combinatorial optimization problem; nevertheless, this study focuses on routing problems. This field has been chosen because of its importance in the real world and its relevance in the current literature. © 2017 Elsevier B.V.
A real-world newspaper distribution problem with recycling policy is tackled in this work. To meet all the complex restrictions contained in such a problem, it has been modeled as a rich vehicle routing problem, which can be more specifically considered as an asymmetric and clustered vehicle routing problem with simultaneous pickup and deliveries, variable costs and forbidden paths (AC-VRP-SPDVCFP). This is the first study of such a problem in the literature. For this reason, a benchmark composed of 15 instances has also been proposed. In the design of this benchmark, real geographical positions have been used, located in the province of Bizkaia, Spain. For the proper treatment of this AC-VRP-SPDVCFP, a discrete firefly algorithm (DFA) has been developed. This is the first application of the firefly algorithm to any rich vehicle routing problem. To prove that the proposed DFA is a promising technique, its performance has been compared with that of two other well-known techniques: an evolutionary algorithm and an evolutionary simulated annealing. Our results show that the DFA outperforms these two classic meta-heuristics. © 2016, Springer-Verlag Berlin Heidelberg.
The location of facilities (antennas, ambulances, police patrols, etc.) has been widely studied in the literature. The maximal covering location problem aims at locating the facilities in positions that maximize a certain notion of coverage. In the dynamic or multi-period version of the problem, it is assumed that the nodes' demand changes over time and, as a consequence, facilities can be opened or closed between periods. In this contribution we propose to solve the dynamic maximal covering location problem using an algorithm portfolio that includes adaptation, cooperation and learning. The portfolio is composed of an evolutionary strategy and three different simulated annealing methods (which were recently used to solve the problem). Experiments were conducted on 45 test instances (considering up to 2500 nodes and 200 potential facility locations). The results clearly show that the performance of the portfolio is significantly better than that of its constituent algorithms. © 2016, Springer-Verlag Berlin Heidelberg.
When we face an optimization problem whose definition changes (in some aspect) over time, we are in the presence of a Dynamic Optimization Problem (DOP). The aspects that can change include the objective function, the variables' domains, the appearance or disappearance of variables or constraints, etc. This paper aims to provide a first introduction for those who are interested in the topic. Concretely, we present DOPs, the most common performance measures, and the methods used to solve them. We also briefly describe some of the most recent reviews and comment on some current challenges and research opportunities in DOPs.
Metaheuristics have proven to perform well on difficult optimization problems in practice. Despite this success, metaheuristics still suffer from several open problems, such as the variability of their performance depending on the problem or instance being solved. One of the approaches to dealing with these problems is the hybridization of techniques. This paper presents a hybrid metaheuristic that combines a Genetic Algorithm (GA) with a Cross Entropy (CE) method to solve continuous optimization functions. The algorithm divides the population into two sub-populations, in order to apply the GA to one sub-population and CE to the other. The proposed method is tested on 24 continuous benchmark functions, with four different dimension configurations. First, a study to find the best parameter configuration is carried out. The best configuration found is then compared with several algorithms in the literature in order to demonstrate the competitiveness of the proposal. The results show that GACE is the best-performing method for instances with high dimensionality. Statistical tests have been applied to support the conclusions obtained in the experimentation. © 2016 Elsevier Ltd. All rights reserved.
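The GA/CE population split described above can be sketched as follows. This is a minimal illustrative version, assuming uniform crossover with Gaussian mutation for the GA sub-population and a per-dimension Gaussian fit to elites for the CE sub-population; none of these operator details or parameters come from the paper itself.

```python
import random
import statistics

def gace_step(population, objective, ga_fraction=0.5, sigma_floor=0.05):
    """One iteration of a GA/Cross-Entropy hybrid (illustrative sketch).

    The population is split in two: a GA sub-population evolves by crossover
    and mutation, while a CE sub-population is resampled from a Gaussian
    fitted to its elite solutions. Minimisation is assumed (best first).
    """
    population.sort(key=objective)
    split = int(len(population) * ga_fraction)
    ga_pop, ce_pop = population[:split], population[split:]

    # GA half: uniform crossover of two random parents plus Gaussian mutation.
    children = []
    for _ in ga_pop:
        a, b = random.sample(ga_pop, 2)
        child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
        children.append([x + random.gauss(0, 0.1) for x in child])

    # CE half: fit a per-dimension Gaussian to the elite half, then resample.
    elites = sorted(ce_pop, key=objective)[: max(2, len(ce_pop) // 2)]
    mu = [statistics.mean(d) for d in zip(*elites)]
    sd = [max(statistics.stdev(d), sigma_floor) for d in zip(*elites)]
    samples = [[random.gauss(m, s) for m, s in zip(mu, sd)] for _ in ce_pop]

    return children + samples
```

Iterating this step on a toy continuous function (e.g. the sphere function) shows the two mechanisms working side by side: the GA half recombines good solutions while the CE half contracts its sampling distribution around promising regions.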
In this paper, the literature associated with the covering location problems addressing uncertainty under a fuzzy approach is reviewed. Specifically, the papers related to the most commonly applied models such as set covering location problem, maximal covering location problem, and hub covering location problem are examined. An annotated bibliography is presented in which such papers have been classified according to the following criteria: the fuzzy items considered in the proposed model, the type of problem addressed, the fuzzy approach applied, the method of resolution, and field of application considered. This research provides useful information that helps to identify some opportunities for the application of fuzzy approaches to the covering location problems. © 2016 World Scientific Publishing Company.
This paper presents a method of optimizing the elements of a hierarchy of fuzzy-rule-based systems (FRBSs). It is a hybridization of a genetic algorithm (GA) and the cross-entropy (CE) method, which is here called GACE. It is used to predict congestion in a 9-km-long stretch of the I5 freeway in California, with time horizons of 5, 15, and 30 min. A comparative study of different levels of hybridization in GACE is made. These range from a pure GA to a pure CE, passing through different weights for each of the combined techniques. The results prove that GACE is more accurate than GA or CE alone for predicting short-term traffic congestion. © 2015 IEEE.
Dynamic Optimization Problems (DOPs) have attracted a growing interest in recent years. This interest is mainly due to two reasons: their closeness to practical real conditions and their high complexity. The majority of the approaches proposed so far to solve DOPs are population-based methods, because it is usually believed that their higher diversity allows a better detection and tracking of changes. However, recent studies have shown that trajectory-based methods can also provide competitive results. This work is focused on this last type of algorithms. Concretely, it proposes a new adaptive local search for continuous DOPs that incorporates a memory archive. The main novelties of the proposal are two-fold: the prioritized tracking, a method to determine which solutions in the memory archive should be tracked first; and an adaptive mechanism to control the minimum step-length or precision of the search. The experimentation done over the Moving Peaks Problem (MPB) shows the benefits of the prioritized tracking and the adaptive precision mechanism. Furthermore, our proposal obtains competitive results with respect to state-of-the-art algorithms for the MPB, both in terms of performance and tracking ability.
Since their first appearance in 1997 in the prestigious journal Science, algorithm portfolios have become a popular approach to solve static problems. Nevertheless and despite that success, they have not received much attention in Dynamic Optimization Problems (DOPs). In this work, we aim at showing these methods as a powerful tool to solve combinatorial DOPs. To this end, we propose a new algorithm portfolio for this type of problems that incorporates a learning scheme to select, among the metaheuristics that compose it, the most appropriate solver or solvers for each problem, configuration and search stage. This method was tested over 5 binary-coded problems (dynamic variants of OneMax, Plateau, RoyalRoad, Deceptive and Knapsack) and compared versus two reference algorithms for these problems (Adaptive Hill Climbing Memetic Algorithm and Self Organized Random Immigrants Genetic Algorithm). The results showed the importance of a good design of the learning scheme, the superiority of the algorithm portfolio against the isolated version of the metaheuristics that integrate it, and the competitiveness of its performance versus the reference algorithms. © 2015, the authors.
The effectiveness of Intelligent Transportation Systems depends largely on the ability to integrate information from diverse sources and the suitability of this information for the specific user. This paper describes a new approach for the management and exchange of this information, related to multimodal transportation. A novel software architecture is presented, with particular emphasis on the design of the data model and the enablement of services for information retrieval, thereby obtaining a semantic model for the representation of transport information. The publication of transport data as semantic information is established through the development of a Multimodal Transport Ontology (MTO) and the design of a distributed architecture allowing dynamic integration of transport data. The advantages afforded by the proposed system due to the use of Linked Open Data and a distributed architecture are stated, comparing it with other existing solutions. The adequacy of the information generated in regard to the specific user’s context is also addressed. Finally, a working solution of a semantic trip planner using actual transport data and running on the proposed architecture is presented, as a demonstration and validation of the system. © 2015 by the authors; licensee MDPI, Basel, Switzerland.
The best-performing methods for Dynamic Optimization Problems (DOPs) are usually based on a set of agents of varying complexity (such as solutions in Evolutionary Algorithms, particles in Particle Swarm Optimization, or metaheuristics in hybrid cooperative strategies). While methods based on low-complexity agents are widely applied to DOPs, the use of more "intelligent" agents has rarely been explored. This work focuses on this topic and, more specifically, on the use of cooperative strategies composed of trajectory-based search agents for DOPs. Within this context, we analyze the influence of the number of agents (cardinality) and their neighborhood sampling strategy on the performance of these methods. Using a low number of agents with distinct neighborhood sampling strategies shows the best results. This method is then compared against state-of-the-art algorithms using as test bed the well-known Moving Peaks Benchmark and dynamic versions of the Ackley, Griewank and Rastrigin functions. The results show that this configuration of the cooperative strategy is competitive with respect to the state-of-the-art methods. © 2013 Elsevier B.V. All rights reserved.
Scenario Planning helps explore what possible futures may look like and establish plans to deal with them, something essential for any company, institution or country that wants to be competitive in this globalized world. In this context, Cross Impact Analysis is one of the most widely used methods to study possible futures or scenarios by identifying the system's variables and the role they play in it. In this paper, we focus on the method called MICMAC (Impact Matrix Cross-Reference Multiplication Applied to a Classification), for which we propose a new version based on Computing with Words techniques and fuzzy sets, namely Fuzzy Linguistic MICMAC (FLMICMAC). The new method allows linguistic assessment of the mutual influence between variables, captures and handles the vagueness of these assessments, expresses the results linguistically, provides information in absolute terms and incorporates two new ways to visualize the results. Our proposal has been applied to a real case study and the results have been compared to those of the original MICMAC, showing the superiority of FLMICMAC, as it gives more robust, accurate, complete and easier-to-interpret information, which can be very useful for a better understanding of the system. © 2014 Elsevier B.V.
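For background, the classical (crisp) MICMAC idea that FLMICMAC builds on can be sketched as follows: raise the direct-influence matrix to a power so that indirect influence paths are counted, then rank variables by row sums (influence) and column sums (dependence). This is a simplified illustration, not the fuzzy linguistic version proposed in the paper.

```python
def micmac_ranking(influence, power=3):
    """Classical MICMAC sketch: multiply the direct-influence matrix by itself
    so that indirect influence paths are accounted for, then score each
    variable by its row sum (influence) and column sum (dependence)."""
    n = len(influence)
    m = [row[:] for row in influence]
    for _ in range(power - 1):  # compute influence^power by repeated multiplication
        m = [[sum(m[i][k] * influence[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    influence_scores = [sum(row) for row in m]
    dependence_scores = [sum(m[i][j] for i in range(n)) for j in range(n)]
    return influence_scores, dependence_scores
```

Variables with high influence and low dependence are the system drivers; FLMICMAC replaces the crisp matrix entries with linguistic assessments handled through fuzzy sets.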
The necessity of developing high-performance resolution methods for continuous optimisation problems has given rise to the emergence of cooperative strategies that combine different self-contained metaheuristics which exchange information among themselves. However, the majority of the proposals found in the literature make use of population-based algorithms and/or employ a cooperation scheme with a pipeline or decentralised information flow. In this work we propose a centralised cooperative strategy, where a set of trajectory-based methods is controlled by a rule-driven coordinator. In this context, we also present a new analysis that allows studying the behaviour induced in the strategy by a given type of cooperation. A comprehensive experimentation has been carried out over the CEC2005 and CEC2008 benchmarks in order to assess the performance of the method with different cooperation schemes. The results show that these cooperation schemes, apart from having different performance, lead the strategy to distinct exploration and exploitation patterns. In addition, the proposed method presents competitive results with respect to state-of-the-art algorithms on both benchmarks. © 2012 Elsevier Inc. All rights reserved.
This work presents a study on the performance of several algorithms on different continuous dynamic optimization problems. Eight algorithms have been used: SORIGA (an Evolutionary Algorithm), an agents-based algorithm, the mQSO (a widely used multi-population PSO) as well as three heuristic-rule-based variations of it, and two trajectory-based cooperative strategies. The algorithms have been tested on the Moving Peaks Benchmark and the dynamic version of the Ackley, Griewank and Rastrigin functions. For each problem, a wide variety of configuration variations have been used, emphasizing the influence of dynamism, and using a full-factorial experimental design. The results give an interesting overview of the properties of the algorithms and their applicability, and provide useful hints to face new problems of this type with the best algorithmic approach. Additionally, a recently introduced methodology for comparing a high number of experimental results in a graphical way is used. © 2012 Elsevier B.V.
With the idea in mind that the computational effort and knowledge gained while solving one problem instance should be used to solve other ones, we present a new strategy that takes advantage of both aspects. The strategy is based on a set of operators and a basic learning process that is fed with the information obtained while solving several instances. The output of the learning process is an adjustment of the operators. The instances can be managed sequentially or simultaneously by the strategy, thus varying the information available for the learning process. The method has been tested on different SAT instance classes and the results confirm (a) the usefulness of the learning process and (b) that, by embedding problem-specific algorithms into our strategy, instances can be solved faster than by applying these algorithms instance by instance. © 2010 Springer-Verlag.
Optimization in dynamic environments is a very active and important area which tackles problems that change with time (as most real-world problems do). In this paper we present a new centralized cooperative strategy based on trajectory methods (tabu search) for solving Dynamic Optimization Problems (DOPs). Two additional methods are included for comparison purposes. The first method is a Particle Swarm Optimization variant with multiple swarms and different types of particles where there exists an implicit cooperation within each swarm and competition among different swarms. The second method is an explicit decentralized cooperation scheme where multiple agents cooperate to improve a grid of solutions. The main goals are: firstly, to assess the possibilities of trajectory methods in the context of DOPs, where populational methods have traditionally been the recommended option; and secondly, to draw attention on explicitly including cooperation schemes in methods for DOPs. The results show how the proposed strategy can consistently outperform the results of the two other methods. © 2010 Springer-Verlag.
Optimization problems are ubiquitous in our daily lives, and one way to cope with them is by using cooperative optimization systems that make it possible to obtain good enough, fast enough, and cheap enough solutions. From a practical point of view, the design and the analysis of such systems are complex tasks. In this work, an integrated system (DACOS) for helping in the design and analysis of cooperative, centralized optimization systems is presented. Also, the methodology used for the creation of DACOS (mainly, the use of software modeling) is described in detail. This may also be useful for researchers who want to build their own system for their particular needs. DACOS has been developed using the Eclipse developing framework, which, among other advantages, is also able to automatically generate source code. Finally, a practical case of use is presented: the application of DACOS to the configuration and analysis of a cooperative strategy on a location problem. Copyright © 2010 John Wiley & Sons, Ltd.
Optimization-based decision support systems (DSSs) are an interesting and important area among the many classes of decision support systems. This paper presents SiGMA, a generic core for building optimization-based DSSs that aims to be as generic as possible regarding the on-line addition and use of solvers, while preserving as much functionality in the Analysis stage as this criterion allows. SiGMA serves as a framework to build more complex DSSs where problem-specific knowledge can be used to improve the functionality available at the Formulation and Analysis stages. Two application examples from different domains are also presented: SiGMAPhub and SiGMAProt. These applications include additional analysis capabilities for the p-hub and the protein structure comparison problems, respectively. © 2008 Elsevier Ltd. All rights reserved.
Class imbalance is among the most persistent complications that may confront the traditional supervised learning task in real-world applications. Among the different kinds of classification problems that have been studied in the literature, imbalanced ones, particularly those that represent real-world problems, have attracted the interest of many researchers in recent years. In order to face these problems, different approaches have been used or proposed in the literature, among them soft computing and ensemble techniques. In this work, ensemble and fuzzy techniques have been applied to real-world traffic datasets in order to study their performance in imbalanced real-world scenarios. The KEEL platform is used to carry out this study. The results show that different ensemble techniques obtain the best results on the proposed datasets. © 2018, Springer International Publishing AG, part of Springer Nature.
Location-based services are limited by the inability of Global Navigation Satellite Systems (GNSS) to work properly in indoor and dense urban environments. Among the different technologies proposed to provide indoor positioning, pedestrian dead-reckoning (PDR) based on inertial sensors is one of the more cost-effective solutions, since no additional infrastructure needs to be installed, but its performance is time-limited by drift problems. Regarding the heading drift, some heuristics make use of the buildings' dominant directions in order to reduce this problem. In this paper, the method known as improved heuristic drift elimination (iHDE) is enhanced to be implemented in a Step-and-Heading (SHS) based PDR system, which allows the inertial sensors to be placed in almost any location on the user's body. In particular, wrist-worn sensors will be used. Tests on synthetically generated and real data show that the iHDE method can be used in an SHS-based PDR system without losing its heading-drift reduction capability. © 2016 IEEE.
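The core idea behind heuristic drift elimination can be illustrated as follows: if the estimated heading deviates only slightly from one of the building's dominant directions, pull it toward that direction. This is a simplified sketch of the general heuristic, not the actual iHDE algorithm; the dominant directions, the threshold and the correction gain below are illustrative assumptions.

```python
DOMINANT_DIRS = [0.0, 90.0, 180.0, 270.0]   # building's dominant directions (deg)

def angle_diff(a, b):
    """Signed smallest difference a - b in degrees, in (-180, 180]."""
    return (a - b + 180.0) % 360.0 - 180.0

def hde_correct(heading, threshold=15.0, gain=0.3):
    """Heuristic drift elimination: pull the heading toward the nearest
    dominant direction when the deviation is below a threshold; otherwise
    assume the user is really walking in a non-dominant direction."""
    nearest = min(DOMINANT_DIRS, key=lambda d: abs(angle_diff(heading, d)))
    dev = angle_diff(heading, nearest)
    if abs(dev) < threshold:
        heading = (heading - gain * dev) % 360.0
    return heading
```

For example, a 95-degree heading estimate is nudged back toward the 90-degree corridor direction, while a 45-degree heading (far from every dominant direction) is left untouched.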
Adaptation to changes in real scenarios is not a 'cost-free' operation. However, in general, this is not considered in most studies on dynamic optimization problems. Our focus here is to analyse what happens when a relocation cost is added to the dynamic maximal covering location problem, that is, when the adaptation to changes entails some cost. Comparing two models, with and without 'cost-free' adaptation respectively, we study how much coverage is lost and how similar the solutions obtained for both problems are over time, and we preliminarily explore the relation between solution similarity and coverage differences. © 2016 IEEE.
Location-based services can improve the quality of patient care and increase the efficiency of healthcare systems. Among the different technologies that provide indoor positioning, pedestrian dead-reckoning (PDR) based on inertial sensors is one of the more cost-effective solutions, but its performance is limited by drift problems. Regarding the heading drift, some heuristics make use of the building's dominant directions in order to reduce this problem. In this paper, we enhance the method known as improved heuristic drift elimination (iHDE) to be implemented in a Step-and-Heading (SHS) based PDR system, which allows the inertial sensors to be placed in almost any location on the user's body. In particular, wrist-worn sensors will be used. Tests on synthetically generated and real data show that the iHDE method can be used in an SHS-based PDR system without losing its heading-drift reduction capability. © 2016 IEEE.
TIMON is an EU research project under the Horizon 2020 programme that aims at creating a cooperative ecosystem integrating traffic information, transport management, ubiquitous data and system self-management. The objective of TIMON is to provide real-time services through a web-based platform and a mobile app for drivers, Vulnerable Road Users (VRUs) and businesses. These services will contribute to increasing assistance for drivers and VRUs. In this project, many of the aforementioned services are supported by a traffic state prediction system. For this reason, the objective of this study is to lay the groundwork for developing an efficient prediction tool. In this work, a preliminary study is presented, comparing the performance of three different evolutionary methods. © 2016 Copyright held by the owner/author(s).
In this paper, a comparative study is presented between a hybrid technique that combines a Genetic Algorithm with a Cross Entropy method to optimize Fuzzy Rule-Based Systems, and techniques from the literature. These techniques are applied to traffic congestion datasets in order to determine their performance in this area. Different types of datasets have been chosen, with time horizons of 5, 15 and 30 minutes. Results show that the hybrid technique improves on the results obtained by state-of-the-art techniques. In this way, the experimentation performed shows the competitiveness of the proposal in this area of application. © Springer International Publishing Switzerland 2016.
Accurate estimation of the future state of traffic is an attractive area for researchers in the field of Intelligent Transportation Systems (ITS). This kind of prediction can lead traffic managers and drivers to act accordingly, reducing the economic and social impact of a possible congestion. Due to the nature of inter-urban traffic information, the task of predicting the future state of traffic requires, in most cases, a search for non-linear patterns in the input data. In recent years, a wide variety of models has been used to solve this problem in the most accurate way. As a result, the models generated to provide information about the future state of the road are usually incomprehensible to a human operator, making it impossible to give him/her an explanation of the causes of the prediction. Given the capacity of rule-based systems to explain the reasoning followed to classify a new pattern, the advantages and disadvantages of such approaches are explored in this work. To conduct this task, datasets recorded from the California Department of Transportation are created. A 9-kilometer section of the I5 highway in Sacramento is used for this research. Two different types of datasets are built for the experimentation. One of them contains all the information recorded. The other one contains a simplified version of the information, considering only the first, middle and last monitored points of the road. Twelve prediction horizons, from 5 to 60 minutes, were considered. An experimental comparative study involving 16 state-of-the-art techniques is performed. The techniques tested include those that fall within the categories of Evolutionary Crisp Rule Learning (ECRL) and Evolutionary Fuzzy Rule Learning (EFRL). These methods were selected since they offer the end user not only a prediction, but also a legible model of the way in which the decision was taken.
Techniques are compared in terms of the accuracy and complexity of the models generated. © 2016 The Authors.
This paper studies how the accuracy of the step detection algorithm of a pedestrian dead-reckoning (PDR) system is affected by the sampling frequency and the filtering of the data gathered from a wrist-worn inertial measurement unit (IMU). On the one hand, results show that the sensors' sampling rate can be reduced while still obtaining a similar accuracy, which is very interesting for energy-saving purposes. However, a low sampling frequency requires a finer tuning of the algorithm's parameters. On the other hand, applying a filter to the data gathered from the sensors is always recommended in order to get some performance improvement. Different types of filters can be used depending on the value of the sampling frequency. © 2015 IEEE.
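A typical step-detection pipeline of this kind, low-pass filtering the accelerometer magnitude and then counting peaks, can be sketched as follows. This is a generic illustration, not the algorithm evaluated in the paper; the filter coefficient, the peak threshold and the minimum inter-step interval are assumed values.

```python
import math

def low_pass(signal, alpha=0.2):
    """Simple exponential low-pass filter (first-order IIR)."""
    out, prev = [], signal[0]
    for x in signal:
        prev = alpha * x + (1 - alpha) * prev
        out.append(prev)
    return out

def count_steps(acc_magnitude, fs, threshold=10.5, min_interval=0.3):
    """Count local maxima above `threshold` (m/s^2) that are at least
    `min_interval` seconds apart in the filtered magnitude signal."""
    filtered = low_pass(acc_magnitude)
    steps, last_t = 0, -min_interval
    for i in range(1, len(filtered) - 1):
        t = i / fs
        is_peak = filtered[i - 1] < filtered[i] >= filtered[i + 1]
        if is_peak and filtered[i] > threshold and t - last_t >= min_interval:
            steps += 1
            last_t = t
    return steps

# Synthetic walking signal: 2 Hz steps on top of gravity (9.81 m/s^2),
# sampled for 5 seconds at 50 Hz -> 10 steps expected.
fs = 50.0
signal = [9.81 + 3.0 * math.sin(2 * math.pi * 2.0 * i / fs) for i in range(250)]
```

Lowering the sampling frequency `fs` in this sketch is exactly the kind of change the paper studies: the filter coefficient and the threshold then need to be re-tuned for the detector to keep its accuracy.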
In this paper, a metaheuristic that combines a Genetic Algorithm and a Cross Entropy algorithm is presented. The aim of this work is to achieve a synergy between the capabilities of the two algorithms, using different population sizes, in order to obtain a value as close as possible to the optimum of the function. The proposal is applied to 12 benchmark functions with different characteristics, using different configurations.
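One way such a GA + Cross Entropy hybrid can be organized is sketched below, assuming a simple generational scheme: part of each new population comes from GA crossover and mutation over the elites, and part is sampled from a Gaussian fitted to the elites (the Cross Entropy step). This is an illustration on the sphere function, not the exact algorithm of the paper; population sizes, rates and the elite fraction are assumptions.

```python
import random

def sphere(x):
    """Benchmark objective: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def ga_ce_hybrid(dim=5, pop_size=40, generations=80, elite_frac=0.25):
    random.seed(7)
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sphere)
        n_elite = int(pop_size * elite_frac)
        elite = pop[:n_elite]
        # Cross Entropy step: fit a per-dimension Gaussian to the elites.
        mu = [sum(e[d] for e in elite) / n_elite for d in range(dim)]
        sd = [max(1e-6, (sum((e[d] - mu[d]) ** 2 for e in elite) / n_elite) ** 0.5)
              for d in range(dim)]
        new_pop = elite[:]                      # elitism
        while len(new_pop) < pop_size // 2:     # GA offspring
            p1, p2 = random.sample(elite, 2)
            child = [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]
            child[random.randrange(dim)] += random.gauss(0, 0.1)  # mutation
            new_pop.append(child)
        while len(new_pop) < pop_size:          # CE offspring
            new_pop.append([random.gauss(mu[d], sd[d]) for d in range(dim)])
        pop = new_pop
    return min(pop, key=sphere)

best = ga_ce_hybrid()
```

The CE samples drive fast convergence toward the elite region, while the GA crossover and mutation maintain diversity; the split between the two offspring pools is the knob the hybrid tunes.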
Managing uncertainty in maximal covering location problems (MCLP) is very important due to the imprecise nature of some elements of these problems in the real world. The demand generated at the nodes, the distance between nodes, the availability of the service and the coverage radius are the parameters that can commonly be considered uncertain. Probabilistic and fuzzy approaches have been the most widely used to model uncertainty in the MCLP, but the fuzzy ones are neither structured nor systematized. For this reason, this article presents a review of the works that have addressed the MCLP under fuzzy uncertainty, as a preliminary step towards a systematic formulation of models and solutions for it.
A question that arises in the design of learning schemes for Dynamic Optimization Problems is deciding what to do with the knowledge acquired once a change occurs: forget it or use it in subsequent changes. In this work, we try to shed light on this issue using the Adaptive Hill Climbing Memetic Algorithm and the Knapsack problem as a test bed. The results obtained show that the answer to the question posed depends on the structure of the instance, and they justify the need for more comprehensive studies.
In the modern world, large companies need to predict changes in the market and anticipate them by making the most appropriate decisions in the present. Scenario Planning is among the specialized tools for this purpose. One of the techniques used in Scenario Planning is Morphological Analysis, whose aim is the systematic generation and evaluation of all possible combinations of values that the variables under study can take, discarding those that are incompatible in practice. Each combination represents a possible future scenario, that is, a plausible situation that may eventually occur. The variant known as MORPHOL evaluates a scenario by the probability that it finally occurs. Here we review a recently published improvement that considers criteria other than probability to evaluate a scenario using linguistic values. In addition, future improvements and new lines currently under investigation are proposed, such as clustering of the generated scenarios and obtaining linguistic descriptions of each scenario, with examples on a real case study.
Companies that want to be competitive must make use of good practices to anticipate the future, analyzing the possible effects of today's decisions on their own long-term development. Scenario planning is among the most widespread approaches to accomplish this. One of the techniques often used in scenario planning is Morphological Analysis, which aims to explore the space of feasible futures in a systematic way by analyzing all the combinations of the possible states of the variables that compose the system under study. Each of these combinations represents a possible future scenario. This work focuses on a particular variant known as the MORPHOL method, in which every scenario is evaluated in terms of the probability that it eventually arises. This is computed using the marginal probability estimates of the hypothetical variables' states involved in the scenario, which are given by human experts. This method presents two drawbacks: first, the probabilities have to be expressed as numerical values, which makes their estimation by humans difficult and does not capture their uncertainty; and second, it examines the scenarios based only on their probability, so it may ignore scenarios that are interesting but not very probable. In order to ease the experts' task and capture their opinions in a better way, we introduce Computing with Words techniques. For solving the second shortcoming, we apply Multi-criteria Decision Making to uncover good scenarios according to several criteria jointly, including probability. The result is a novel linguistic multi-criteria method for morphological analysis that has been successfully applied to a real problem and thus deserves further research.
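The MORPHOL-style enumeration described above can be sketched as follows: every combination of variable states is a scenario, combinations containing incompatible pairs are discarded, and each surviving scenario is scored by the product of the expert-given marginal probabilities. The variables, states, probabilities and exclusion pairs below are purely illustrative assumptions, not data from the real case study.

```python
from itertools import product

# Hypothetical morphological table: each variable with its possible states
# and expert-estimated marginal probabilities (illustrative values).
VARIABLES = {
    "demand":     {"low": 0.3, "medium": 0.5, "high": 0.2},
    "regulation": {"strict": 0.4, "lax": 0.6},
    "technology": {"mature": 0.7, "disruptive": 0.3},
}
# Pairs of states that are incompatible in practice and must be discarded.
EXCLUSIONS = {("lax", "disruptive")}

def enumerate_scenarios():
    """MORPHOL-style enumeration: every combination of states is a scenario,
    scored by the product of the marginal probabilities of its states."""
    names = list(VARIABLES)
    scenarios = []
    for states in product(*(VARIABLES[n] for n in names)):
        if any((a, b) in EXCLUSIONS for a in states for b in states):
            continue
        prob = 1.0
        for name, state in zip(names, states):
            prob *= VARIABLES[name][state]
        scenarios.append((dict(zip(names, states)), prob))
    return sorted(scenarios, key=lambda s: -s[1])

scenarios = enumerate_scenarios()
```

The linguistic multi-criteria extension discussed in the text replaces the single numeric `prob` score with expert judgements expressed as linguistic labels and with several criteria evaluated jointly.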
Technology foresight can be defined as a set of studies carried out in order to anticipate the future in a given area. A widely used methodology for this purpose is Godet's Scenario Method, which includes a module for performing the so-called structural analysis (MICMAC). Its aim is to determine the most important variables in the system under study, taking as input the influence relations existing among them, which are quantified numerically by a group of experts in the form of a matrix. We propose a fuzzy extension of this method that allows the experts to judge the relations by means of linguistic labels and to obtain more easily interpretable results. This is an important advantage over the traditional method, since in this kind of study the interpretability of the outputs and intermediate results is a paramount aspect. A Java implementation of our fuzzy procedure has been developed with a publicly accessible web interface, which demonstrates the ease of use of the proposed approach. The first results are interesting from a practical point of view.
Developing predictive models is one of the key issues in Systems Biology. A critical problem that arises when these models are built is parameter estimation. The calibration of these nonlinear dynamic models is stated as a nonlinear programming problem (NLP), and its resolution is usually complex due to the frequent ill-conditioning and multimodality of the majority of these problems. For that reason, the use of hybrid stochastic optimization methods has received increasing interest in recent years. In this work we present a new hybrid method for parameter estimation in Systems Biology. This proposal consists of a set of DE algorithms that cooperate with each other through a centralised scheme in which a coordinator controls their behavior by means of a rule system. The comparison with state-of-the-art methods shows the better performance of this cooperative strategy as the complexity of the instances increases.
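A minimal sketch of such a centralized cooperative scheme follows, with a single fixed coordination rule standing in for the paper's rule system: several DE populations evolve independently and, periodically, the coordinator injects the global best solution into the worst-performing island. All parameter values are illustrative, and a toy sphere function replaces the real model-calibration objective.

```python
import random

def de_generation(pop, fitness, f=0.6, cr=0.9):
    """One generation of classic DE/rand/1/bin with greedy selection."""
    dim = len(pop[0])
    new_pop = []
    for i, x in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        j_rand = random.randrange(dim)
        trial = [a[d] + f * (b[d] - c[d])
                 if (random.random() < cr or d == j_rand) else x[d]
                 for d in range(dim)]
        new_pop.append(trial if fitness(trial) < fitness(x) else x)
    return new_pop

def cooperative_de(fitness, dim=4, n_islands=3, pop_size=15, generations=60):
    """Coordinator rule: every 10 generations, the worst island receives
    a copy of the global best solution."""
    random.seed(3)
    islands = [[[random.uniform(-5, 5) for _ in range(dim)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for g in range(generations):
        islands = [de_generation(pop, fitness) for pop in islands]
        if g % 10 == 9:
            bests = [min(pop, key=fitness) for pop in islands]
            global_best = min(bests, key=fitness)
            worst = max(range(n_islands), key=lambda k: fitness(bests[k]))
            islands[worst][0] = global_best[:]
    return min((min(pop, key=fitness) for pop in islands), key=fitness)

best = cooperative_de(lambda x: sum(v * v for v in x))
```

In the actual method the fixed "every 10 generations" rule is replaced by a rule system that decides when and what to exchange based on the observed behavior of each DE instance.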
Technology foresight deals with the necessity of anticipating the future in order to better adapt to new situations regarding innovations that directly affect the business world. One widespread methodology in technology foresight is Godet's Scenario Method, which includes a module (MICMAC) performing the so-called structural analysis. The goal of structural analysis is to identify the most important variables in a system. To this end, it makes use of an influence matrix that describes the relations between the variables. This information is usually given by experts based on their own knowledge and experience. However, some of the information in the influence matrix may contain errors due to the subjective nature of the experts' criteria and opinions. Here we propose a new analysis that follows a multi-objective approach and makes it possible to measure the sensitivity of the model to possible errors in the input. The well-known NSGA-II algorithm has been used as a solver. The results are encouraging and deserve further investigation.
Among the different types of hybrid metaheuristics, cooperative strategies stand out as one of the most promising alternatives. These methods consist of a set of algorithms that explore the search space simultaneously while exchanging information among themselves. With the aim of deepening the study of such methods, and using the p-hub median problem, in this work we analyze how notably changing the configuration of the component metaheuristics affects the behavior of the cooperative strategy.
Optimisation in dynamic environments is a very active and important area which tackles problems that change with time (as most real-world problems do). The possibility of using a new centralised cooperative strategy based on trajectory methods (tabu search) for solving Dynamic Optimisation Problems (DOPs) was previously introduced, showing good results against state-of-the-art methods such as the Particle Swarm Optimisation (PSO) variant with multiple swarms and different types of particles. The analysis of this previous work is further extended here by exploring more possibilities for the cooperation rules used in the strategy. The results show that different classes of cooperation can lead to quite different results, some of them greatly outperforming the previous ones.
Metaheuristics are excellent tools for addressing hard combinatorial optimization problems. Given a limited quantity of time and space resources, heuristic algorithms are very effective in providing good quality solutions. However, among the open problems in this research field, we want to outline the following ones:
The first issue can be handled by cooperative strategies, where a set of potentially good heuristics for the optimization problem are executed in parallel, sharing information during the run. The second problem is successfully addressed by reactive strategies and the use of sub-symbolic machine learning to automate the parameter tuning process, making it an integral part of the algorithm. In , a centralized cooperative strategy is presented where a coordinator controls a set of heuristics by means of a fuzzy rule base. The aim of this paper is to investigate the effectiveness of a coordinator driven by reactive rules. An empirical comparison between both types of rules is provided using the Uncapacitated Single Allocation p-Hub Median Problem (USApHMP) as an example.
Most of the metaheuristics with self-adaptation mechanisms proposed in the literature tackle each resolution from scratch, without taking into account what has been learned during other searches. Based on the idea that the computational effort spent solving one instance should be exploited when solving similar instances, we present a new model that allows the simultaneous solution of a group of instances. This model contains a set of agents that operate on a search space. It presents a two-layer multi-level scheme that hierarchically controls the self-adaptation of the operators' parameters. This new method has been applied to the MAX-SAT problem with sets of several instances, obtaining promising results.
This intelligent system was also developed in the framework of the H2020 project TIMON (Grant agreement 636220). The aim of the system was to provide point-to-point routes for various means of transport: bike, motorbike, car and public transport. Furthermore, for some of these means of transport, the system allowed different types of routes. Concretely, for bikes, the system offered the safest, the flattest and the quickest routes; for motorbikes, the safest, the greenest and the quickest routes; and for cars, the greenest and the quickest routes. Apart from this, the system also presented two main novelties: 1) a functionality based on Evolutionary Algorithms that provided alternative routes to users by exploiting the flexibility they usually have in their preferences (e.g. a route that is only 3 minutes longer than the preferred one, but 30% safer); and 2) the incorporation into the calculation of the routes of different types of events (e.g. traffic accidents, road works, traffic jams, etc.) and of the traffic information generated by the previous system. The system developed was also a stand-alone software module with standardized inputs (i.e. DATEX II compliant data sources, GTFS, RT-GTFS, etc.), whereas the output information was designed to contain the general information of the route (e.g. distance, travel time, geometry, etc.) and the turn-by-turn navigation indications. As in the previous system, I was in charge of the design and architecture of the system as well as the supervision of its implementation, deployment, integration and testing during the project.
This intelligent system was developed in the framework of the H2020 project TIMON (Grant agreement 636220). The objective of this system was two-fold: on the one hand, to provide real-time traffic information (flow, average speed and traffic state) at road-link level by fusing information from different traffic data sources, such as loop sensors, Bluetooth sensors and floating data coming from vehicles; and on the other hand, to provide traffic information, also at road-link level, for future time horizons (15, 30, 45 and 60 minutes) by fusing data coming from traffic sensors with contextual information (weather and calendar) and by using models based on Fuzzy Rule-Based Systems. The system developed is a stand-alone software module with standardized inputs and outputs according to DATEX-II guidelines. I was in charge of the design and architecture of the system as well as the supervision of its implementation, deployment, integration and testing during the project.
This intelligent system was developed in collaboration with a local company called Gobile, as part of the joint innovation project i-APUS, which aimed at implementing a software platform for police forces (subject to a non-disclosure agreement). Its function was the design of patrol areas for police forces in urban regions. Given an incident prediction map (provided by another software module of the i-APUS project) and the available resources (number of patrols, vehicle type, etc.), the intelligent system defines the patrol areas that maximize the expected percentage of incidents attended to before a predefined time threshold. The development entailed the following aspects: modelling of the problem elements (patrols, incidents, locations, patrol areas, etc.); definition of the optimization model as a Maximum Expected Covering Location Problem; and the design of a metaheuristic to solve the underlying maximization problem. I was in charge of the whole design and implementation in Java of this intelligent system.
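The Maximum Expected Covering Location objective can be illustrated with a short sketch (in Python for illustration; the actual system was implemented in Java and used a metaheuristic rather than this simple greedy rule). A demand point covered by k facilities contributes d_i * (1 - q^k) to the expected coverage, where q is the probability that a facility is busy; the toy instance below is an assumption.

```python
def expected_coverage(demands, cover_counts, q=0.3):
    """MEXCLP objective: demand point i, covered by k facilities,
    contributes d_i * (1 - q**k), with q the busy probability."""
    return sum(d * (1 - q ** k) for d, k in zip(demands, cover_counts))

def greedy_mexclp(demands, coverage, n_facilities, q=0.3):
    """Constructive sketch: repeatedly open the site with the largest
    marginal gain in expected covered demand. coverage[s] is the set of
    demand points covered by site s; sites may host several facilities."""
    counts = [0] * len(demands)
    chosen = []
    sites = list(range(len(coverage)))
    for _ in range(n_facilities):
        def gain(s):
            new = counts[:]
            for i in coverage[s]:
                new[i] += 1
            return (expected_coverage(demands, new, q)
                    - expected_coverage(demands, counts, q))
        best = max(sites, key=gain)
        chosen.append(best)
        for i in coverage[best]:
            counts[i] += 1
    return chosen, expected_coverage(demands, counts, q)

# Toy instance: 4 demand points, 3 candidate sites, 2 patrols to place.
demands = [10, 20, 30, 40]
coverage = [{0, 1}, {1, 2}, {2, 3}]
chosen, value = greedy_mexclp(demands, coverage, n_facilities=2)
```

On this instance the greedy rule first opens site 2 (covering the two heaviest demand points) and then site 0, so every point ends up covered exactly once and the expected coverage is 100 * (1 - q) = 70.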
This intelligent system was developed in collaboration with a local company called Gobile, as part of the joint innovation project i-APUS, which aimed at implementing a software platform for police forces (subject to a non-disclosure agreement). The objective of this intelligent system for the scheduling and rostering of police forces consisted of finding the assignment of police officers to shifts that best fitted a predefined set of requirements (number of officers needed, skills, etc.) and constraints (maximum number of working hours, minimum resting time after a night shift, etc.). The development entailed the following aspects: modelling of the requirements and constraints with an XML schema; definition of the search model as a constraint satisfaction problem; and the design of a metaheuristic to solve the underlying optimization problem. I was in charge of the whole design and implementation in Java of this intelligent system.
Result of Greater Transcendence and Scientific Originality of the Ministry of Higher Education of Cuba, 2016. For "New methods and applications of computational intelligence techniques". Resolution No. 021 of April 11, 2016, Ministry of Higher Education of Cuba. Role: Participation as a secondary author. Prize resolution: link (In Spanish).
Coordinator of the Deusto team, ranked 3rd in the Transportation Forecasting Competition (TRANSFOR 19) at the 2019 TRB Annual Meeting Workshop. Out of a total of 71 registered teams, 31 submitted their models and results within the deadline. We were among the 5 finalists selected from the 31 submitted models (see next link). In the final stage we were awarded 3rd position for having the best scores in novelty and presentation (see next link).
Ph.D. Programme on Engineering for the Information Society and Sustainable Development, University of Deusto, 6 hours.
Ph.D. Programme on Engineering for the Information Society and Sustainable Development, University of Deusto, 3 hours.
Degree in Information and Communication. 1st year. University of Granada. 30 hours.
Degree in Computer Engineering. 1st year. University of Granada. 32 hours.
I would be happy to talk to you if you need my assistance in your research or your business. I would also be glad to discuss with you any issue related to my research or to provide you with further material (source code of the software, manuscripts, etc.).
You can find me at my office located at the Deusto Institute of Technology (DeustoTech), University of Deusto. Please contact me to arrange an appointment.
The full address is: Deusto Institute of Technology