Galaxy merger challenge: A comparison study between machine learning-based detection methods

Margalef-Bentabol, B.; Wang, L.; La Marca, A.; Blanco-Prieto, C.; Chudy, D.; Domínguez-Sánchez, H.; Goulding, A. D.; Guzmán-Ortega, A.; Huertas-Company, M.; Martin, G.; Pearson, W. J.; Rodriguez-Gomez, V.; Walmsley, M.; Bickley, R. W.; Bottrell, C.; Conselice, C.; O'Ryan, D.
Bibliographic reference: Astronomy and Astrophysics
Publication date: July 2024
Number of authors: 17
Number of IAC authors: 1
Number of citations: 0
Number of refereed citations: 0

Description

Aims: Various galaxy merger detection methods have been applied to diverse datasets. However, it is difficult to understand how they compare. Our aim is to benchmark the relative performance of merger detection methods based on machine learning (ML).
Methods: We explore six leading ML methods using three main datasets. The first dataset consists of mock observations from the IllustrisTNG simulations, which acts as the training data and allows us to quantify the performance metrics of the detection methods. The second dataset consists of mock observations from the Horizon-AGN simulations, introduced to evaluate how the classifiers perform when applied to data that are comparable to, but distinct from, their training data. The third dataset is composed of real observations from the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) survey. We also compare mergers and non-mergers detected by the different methods with a subset of HSC-SSP visually identified galaxies.
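The abstract does not include an implementation, but the in-domain versus cross-domain evaluation it describes can be sketched as follows. This is a minimal illustration only: the file names and features are placeholders, and a random forest stands in for the six dedicated ML methods compared in the study (e.g. CNN-based classifiers such as Zoobot); the metrics are the precision, recall, and F1 score quoted in the Results.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score

# Hypothetical inputs: per-galaxy features (or flattened mock images) and
# binary labels (1 = merger, 0 = non-merger). File names are placeholders.
X_tng = np.load("tng_features.npy")      # IllustrisTNG mocks: training domain
y_tng = np.load("tng_labels.npy")
X_hagn = np.load("hagn_features.npy")    # Horizon-AGN mocks: transfer domain
y_hagn = np.load("hagn_labels.npy")

# Hold out part of the training domain for in-domain evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X_tng, y_tng, test_size=0.2, random_state=0)

# A generic classifier used here only as a stand-in for the paper's methods.
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)

for name, X, y in [("IllustrisTNG (in-domain)", X_test, y_test),
                   ("Horizon-AGN (cross-domain)", X_hagn, y_hagn)]:
    pred = clf.predict(X)
    print(f"{name}: precision={precision_score(y, pred):.2f} "
          f"recall={recall_score(y, pred):.2f} f1={f1_score(y, pred):.2f}")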
Results: For the simplest binary classification task (i.e. mergers vs. non-mergers), all six methods perform reasonably well in the domain of the training data. At the lowest redshifts explored (0.1 < z < 0.3), precision and recall generally range between ~70% and 80%, and both decrease with increasing z as expected (by ~5% for precision and ~10% for recall at the highest redshifts explored, 0.76 < z < 1.0). When transferred to a different domain, the precision of all classifiers is only slightly reduced, but the recall is significantly worse (by ~20-40% depending on the method). Zoobot offers the best overall performance in terms of precision and F1 score. When applied to real HSC observations, the different methods agree well with visual labels of clear mergers, but can differ by more than an order of magnitude in predicting the overall fraction of major mergers. For the more challenging multi-class classification task of distinguishing between pre-mergers, ongoing-mergers, and post-mergers, none of the methods in their current set-ups offers good performance, which could be partly due to the limitations in resolution and depth of the data. In particular, ongoing-mergers and post-mergers are much more difficult to classify than pre-mergers. With the advent of better quality data (e.g. from JWST and Euclid), it is of great importance to improve our ability to detect mergers and distinguish between merger stages.
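For the multi-class task, per-class metrics are what make the asymmetry between merger stages explicit. The snippet below only illustrates how such per-class scores and an overall predicted merger fraction would be tabulated; the labels are randomly generated and carry no information about the paper's actual results, and the stage names are assumed from the abstract.

import numpy as np
from sklearn.metrics import classification_report

stages = ["non-merger", "pre-merger", "ongoing-merger", "post-merger"]

# Randomly generated stand-in labels, purely to show the reporting format;
# real true/predicted stages would come from the simulation merger trees
# and from the trained classifiers, respectively.
rng = np.random.default_rng(0)
y_true = rng.integers(0, len(stages), size=1000)
y_pred = rng.integers(0, len(stages), size=1000)

# Per-class precision, recall, and F1 show which stages are hardest to
# recover; the paper reports ongoing- and post-mergers scoring well below
# pre-mergers.
print(classification_report(y_true, y_pred, target_names=stages))

# The overall predicted merger fraction (here: anything not "non-merger") is
# the quantity on which the methods disagree by up to an order of magnitude
# when applied to real HSC observations.
print("predicted merger fraction:", np.mean(y_pred != 0))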