Aimin Zhou, East China Normal University, Shanghai, China
Dr. Aimin Zhou is currently a Professor with the Department of Computer Science and Technology, East China Normal University, Shanghai, China. He received the B.Sc. and M.Sc. degrees from Wuhan University, Wuhan, China, in 2001 and 2003, respectively, and the Ph.D. degree from the University of Essex, Colchester, U.K., in 2009, all in computer science. His research interests include evolutionary computation and optimization, machine learning, image processing, and their applications. He has published over 50 peer-reviewed papers and received the Best Paper Award at IES 2014. He is an Associate Editor of Swarm and Evolutionary Computation, Complex & Intelligent Systems, and Swarm Intelligence and Numerical Methods.
Personal Webpage: http://faculty.ecnu.edu.cn/s/1949/main.jspy
In swarm intelligence algorithms, each individual in the swarm represents a solution in the search space, and it can also be seen as a data sample from the search space. Based on analyses of these data, more effective algorithms and search strategies can be proposed. The brain storm optimization (BSO) algorithm is a new and promising swarm intelligence algorithm that simulates the human brainstorming process. Through a convergent operation and a divergent operation, individuals in BSO are grouped and diverged in the search space/objective space. This talk reviews the development history and the state of the art of the BSO algorithm. Every individual in the BSO algorithm is not only a solution to the problem being optimized, but also a data point that reveals the landscape of the problem. Building on a survey of brain storm optimization algorithms, further analyses can be conducted to understand how BSO works, and more BSO variants can be proposed to solve different problems.
The Brain Storm Optimization (BSO) algorithm is a new kind of swarm intelligence algorithm based on a collective behaviour of human beings, namely the brainstorming process. Two major operations are involved in BSO: the convergent operation and the divergent operation. A "good enough" optimum can be obtained through recursive solution divergence and convergence in the search space. An optimization algorithm designed this way naturally has the capability of both convergence and divergence.
BSO possesses two kinds of functionality: capability learning and capacity developing. The divergent operation corresponds to capability learning, while the convergent operation corresponds to capacity developing. Capacity developing focuses on moving the algorithm's search toward the area(s) where higher-potential solutions may exist, while capability learning focuses on the actual search for new solution(s), starting from the current solution for single-point optimization algorithms and from the current population of solutions for population-based swarm intelligence algorithms. Capability learning and capacity developing alternate in a cycle that moves individuals toward better and better solutions. The BSO algorithm can therefore also be called a developmental brain storm optimization algorithm.
Capacity developing is a top-level, or macro-level, learning methodology. It describes the ability of an algorithm to adaptively change its parameters, structures, and/or its learning potential according to the search states of the problem being solved. In other words, capacity developing is the search potential possessed by an algorithm. Capability learning is a bottom-level, or micro-level, learning methodology. It describes the ability of an algorithm to find better solution(s) from current solution(s) with the learning capacity it possesses.
The BSO algorithm can also be seen as a combination of swarm intelligence and data mining techniques. Every individual in the brain storm optimization algorithm is not only a solution to the problem to be optimized, but also a data point to reveal the landscapes of the problem. The swarm intelligence and data mining techniques can be combined to produce benefits above and beyond what either method could achieve alone.
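The convergent/divergent cycle described above can be sketched in code. The following is a minimal illustration, not the authors' reference implementation: it assumes k-means as the convergent (grouping) operation, Gaussian perturbation with a shrinking logistic step size as the divergent operation, a sphere objective, and illustrative parameter values (`p_one`, `p_center`, the schedule constant 20.0); all function names and parameters here are hypothetical choices for the sketch.

```python
import numpy as np

def sphere(x):
    """Toy objective for the demo: minimize sum of squares."""
    return float(np.sum(x ** 2))

def kmeans(points, m, iters=10, rng=None):
    """Plain k-means: the 'convergent operation' that groups individuals."""
    rng = rng if rng is not None else np.random.default_rng()
    centers = points[rng.choice(len(points), m, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((points[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(m):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

def bso(f, dim=10, n=50, m=5, max_iter=200, p_one=0.8, p_center=0.4, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, (n, dim))
    fit = np.array([f(x) for x in pop])
    for t in range(max_iter):
        labels = kmeans(pop, m, rng=rng)          # convergent operation
        # step size shrinks over time via a logistic (logsig) schedule
        xi = rng.random() / (1.0 + np.exp(-(0.5 * max_iter - t) / 20.0))
        for i in range(n):
            if rng.random() < p_one:              # new idea from one cluster
                c = rng.integers(m)
                members = np.where(labels == c)[0]
                if len(members) == 0:             # guard against empty clusters
                    members = np.arange(n)
                if rng.random() < p_center:
                    base = pop[members[np.argmin(fit[members])]]  # cluster best
                else:
                    base = pop[rng.choice(members)]
            else:                                 # combine ideas from two individuals
                a, b = rng.choice(n, 2, replace=False)
                w = rng.random()
                base = w * pop[a] + (1.0 - w) * pop[b]
            cand = base + xi * rng.normal(size=dim)   # divergent operation
            fc = f(cand)
            if fc < fit[i]:                       # keep the better solution
                pop[i], fit[i] = cand, fc
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])

x_best, f_best = bso(sphere, dim=5, n=30, m=4, max_iter=100)
```

Note how the population itself doubles as a data set: the k-means step mines the current solutions to decide where to focus the search, which is exactly the combination of swarm intelligence and data mining the abstract describes.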
Yuhui Shi, Southern University of Science and Technology, Shenzhen, China, email@example.com
Yuhui Shi received the PhD degree in electronic engineering from Southeast University, Nanjing, China, in 1992. He is a chair professor in the Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China. He is a Fellow of the IEEE. His main research interests include the areas of computational intelligence techniques (including swarm intelligence) and their applications. Dr. Shi is the Editor-in-Chief of the International Journal of Swarm Intelligence Research.
Personal Webpage: http://cse.sustc.edu.cn/cn/people/view/people_id/3/sort_id/9/pid/
Shi Cheng, Shaanxi Normal University, Xi’an, China, firstname.lastname@example.org
Shi Cheng received the Bachelor's degree in Mechanical and Electrical Engineering from Xiamen University, Xiamen, China, the Master's degree in Software Engineering from Beihang University (BUAA), Beijing, China, and the Ph.D. degree in Electrical Engineering and Electronics from the University of Liverpool, Liverpool, United Kingdom, in 2005, 2008, and 2013, respectively. He is currently a lecturer with the School of Computer Science, Shaanxi Normal University, China. His current research interests include swarm intelligence, multiobjective optimization, and data mining techniques and their applications.
Evolutionary search has been shown to be a powerful approach to complex search problems (e.g., NP-hard optimization problems). The big data era has brought new challenging problems, whose complexity, requirements, and even available computing facilities have changed dramatically in the last decade. This talk will present three typical research challenges that the big data era has brought to a prominent position in evolutionary computation research, and introduce our latest efforts to tackle them.
Ke Tang, Southern University of Science and Technology, Guangdong, China
Ke Tang is a Professor at the Department of Computer Science and Engineering, Southern University of Science and Technology (SUSTech). His major research interests include machine learning, evolutionary computation, and their applications. He has published more than 130 journal and conference papers. According to Google Scholar, his publications have received more than 6000 citations and his H-index is 35. He is or was an Associate Editor or Editorial Board Member of the IEEE Transactions on Evolutionary Computation, IEEE Computational Intelligence Magazine, Computational Optimization and Applications (Springer), Natural Computing (Springer), and Memetic Computing (Springer), and has served as program/technical chair or co-chair of 10 international conferences. He received the Royal Society Newton Advanced Fellowship in 2015 and the 2018 IEEE Computational Intelligence Society Outstanding Early Career Award.
More tutorials will be added soon.