Stellar streams around the Milky Way in the Legacy Surveys imaging data

[Figure: “field of streams” map of matched-filter stellar density in equatorial coordinates, color-coded by distance]

The stellar halo of the Milky Way is full of debris streams (“stellar streams”) left over from dwarf galaxies and globular clusters that have been destroyed by our Galaxy’s gravitational field. These structures are diffuse, distant, and much lower density than foreground stars in, e.g., the Milky Way disk, which is why they don’t show up if you simply plot the sky positions of all stars from a photometric survey like SDSS or Pan-STARRS. One way to search for stellar streams and enhance their significance is to filter the photometric data using templates constructed from isochrones of stellar populations similar to the objects we think disrupt to form the streams (generally old and metal-poor). By sliding these color-magnitude filters in distance (i.e., distance modulus), we can construct density maps of the sky positions of the filtered stars and look for linear features that persist or appear consistently across distance slices. This has been done in the past using, e.g., the northern SDSS data, the full SDSS footprint, and Pan-STARRS.
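To make the procedure concrete, here is a minimal sketch of sliding a color-magnitude filter in distance modulus and histogramming the surviving stars. Everything here (the function name, the fixed filter width, the pixel size) is a hypothetical stand-in, not our actual pipeline, and the tabulated isochrone is assumed trimmed to a segment monotonic in magnitude:

```python
import numpy as np

def matched_filter_maps(g, r, ra, dec, iso_color, iso_absmag, dm_grid,
                        color_width=0.05, pix_deg=0.2):
    """Count stars passing an isochrone color-magnitude filter, in sky
    pixels, for each trial distance modulus in dm_grid.

    iso_color, iso_absmag: tabulated isochrone (g-r color vs. absolute
    r magnitude), trimmed to a segment monotonic in magnitude.
    """
    ra_edges = np.arange(0.0, 360.0 + pix_deg, pix_deg)
    dec_edges = np.arange(-90.0, 90.0 + pix_deg, pix_deg)
    maps = []
    for dm in dm_grid:
        # Shift the isochrone to apparent magnitudes at this distance.
        iso_mag = iso_absmag + dm
        # Predicted isochrone color at each star's apparent r magnitude.
        pred_color = np.interp(r, iso_mag, iso_color)
        keep = np.abs((g - r) - pred_color) < color_width
        # Density map of the sky positions of the surviving stars.
        H, _, _ = np.histogram2d(ra[keep], dec[keep],
                                 bins=[ra_edges, dec_edges])
        maps.append(H)
    return np.array(maps)  # shape (n_slices, n_ra, n_dec)
```

In practice the filter width would grow with the photometric uncertainties at faint magnitudes rather than staying fixed, but the idea is the same.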

This week, Nora Shipp (Chicago), who found most of the known streams in the southern sky, has been visiting the CCA to search for new streams, and to confirm the existence of stream candidates, using data from the DESI Imaging Legacy Surveys. This survey is deeper and higher resolution than SDSS, so we are hoping to find some interesting new things. So far, we have produced filtered sky maps in distance slices, and we are beginning to correlate structures in our new maps with past surveys and discoveries.

One way of visualizing the data is to produce a “field of streams” map by assigning the density in different distance slices to different color channels. In the sky map shown above (in equatorial coordinates), the intensity of each pixel is related to the number of stars that pass our color-magnitude filter, and the color of each pixel indicates the mean distance of the stars in that pixel: red pixels are ~20–60 kpc from the Sun, green ~10–20 kpc, and blue ~5–10 kpc. Already in this map we can see many of the known prominent streams in the northern and southern hemispheres (the widest and most prominent being the Sagittarius stream). Now that we have pretty images, we have to buckle down and try to characterize individual features!
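As an illustration of the color-channel trick (not our actual plotting code), one could stack three distance-slice density maps into an RGB image with an arcsinh stretch to bring out faint structure; the Poisson maps below are placeholders for real filtered star counts:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder density maps for three distance slices; in reality these
# would come from the matched-filter counts described above.
rng = np.random.default_rng(0)
near, mid, far = (rng.poisson(3.0, size=(450, 900)).astype(float)
                  for _ in range(3))

def stretch(x):
    # Arcsinh stretch normalized to [0, 1] to bring out faint structure.
    y = np.arcsinh(x / np.nanpercentile(x, 95.0))
    return np.clip(y / np.nanmax(y), 0.0, 1.0)

# Far slice in the red channel, intermediate in green, near in blue.
rgb = np.dstack([stretch(far), stretch(mid), stretch(near)])
plt.imshow(rgb, origin="lower")
plt.show()
```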

Made-to-Measure modelling

[Figure: PRIMAL fit diagnostics; left: time evolution of the fit, right: radial profiles (described below)]

As a new Flatiron Research Fellow who started a few weeks ago, most of my time here so far has been taken up with getting set up in a new institute, and a new country. But I’ve also had time to continue some of my thesis work from a few years ago, which involves using the Made-to-Measure (M2M; Syer & Tremaine 1996) technique to tailor galaxy simulations to match data. Or, in this case, fake data.

With M2M, you start with a particle model of the desired galaxy, and then slowly alter the weights of the model particles so that they better represent the target system, while also letting the particles orbit in the galactic potential. While the traditional M2M algorithm runs on a ‘test particle’ system, where the gravitational potential is fixed and known, my algorithm PRIMAL runs on a live N-body model, where the gravity is calculated from the interactions of the particles themselves. Because we change the particle masses, the potential also changes, and can in principle be recovered. During my PhD I showed that PRIMAL can recover a simple galaxy model from mock data constructed with a selection function based on the European Space Agency’s Gaia mission (Hunt & Kawata 2014), with the goal of applying PRIMAL to actual Gaia data to create a new model of the Milky Way.
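Schematically, the core of the M2M weight update looks something like the sketch below: a bare-bones version of the Syer & Tremaine force-of-change equation, omitting the temporal smoothing and entropy regularization used in practice, and with a linear kernel K standing in for however the observables are computed from the particles:

```python
import numpy as np

def m2m_weight_step(w, K, Y, eps=1e-3, dt=1.0):
    """One schematic Made-to-Measure weight update (after Syer & Tremaine 1996).

    w : (N,)   current particle weights
    K : (J, N) kernel giving particle i's contribution to observable j
    Y : (J,)   target observables
    """
    y = K @ w                      # model observables from current weights
    delta = (y - Y) / Y            # fractional residuals
    dw = -eps * w * (K.T @ delta)  # "force of change" on each weight
    return w + dt * dw
```

In PRIMAL the same kind of update acts on the masses of a live N-body model, so the potential responds as the weights evolve.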

However, this previous galaxy model was relatively smooth, with only the galactic bar as non-axisymmetric structure. In reality, the Milky Way shows significant disequilibrium features in both the positions and motions of its stars, some of which are likely due to interactions with satellite galaxies. It remains an open question how well PRIMAL (or M2M in general) can recover such features. As an initial test, I am applying PRIMAL to a galaxy from the Auriga cosmological simulations, which formed through a series of mergers and is a more realistic target. The modeling is ongoing, and the figure shows the progress part way through the simulation. The left plot shows the time evolution of the density fit (upper panel), the kinematic fit in all three dimensions (second panel), the bar rotation speed (third panel), and the disc mass (lower panel). The right plot shows the radial profiles of the surface density (upper panel), the radial velocity dispersion (second panel), the vertical velocity dispersion (third panel), and the mean rotation velocity (lower panel) for the initial model (blue dotted line), the target data (black line), and the current state of the model (red dashed line). The model is now a considerably better fit to the target data than the initial model was, but it is still converging, and I won’t know how well we’ve done for another week or so.

Giant planets orbiting giant stars with TESS

[Figure: “quicklook” diagnostic plot for a TESS red giant target; lightcurves and BLS search (left), Fourier transform (right)]

Along with starting at the Museum last month, I also kicked off a search for (large, so probably gaseous) planets orbiting low-luminosity (3–10 Rsun) red giant branch stars with TESS. First, I used MAST to create a target catalog using TICv8 stellar parameters, making cuts on stellar radius and temperature that left me with ~40,000 TESS targets. I then used the eleanor pipeline to download individual sectors of data for targets observed in TESS sectors 1 and 2. From there, I set up a “quicklook” pipeline to turn these lightcurves into plots, in which I can look for key features to identify high-priority targets for ground-based followup RV observations. The two key features I’ve been looking for are planet transits in the lightcurve and stellar granulation and oscillation features in the Fourier transform of the lightcurve. Combining these features, I can precisely characterize the stellar mass and radius and the planet candidate radius, which I will combine with RV measurements to confirm planets and measure their masses and, when possible, eccentricities.
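A minimal sketch of the eleanor call pattern (the TIC ID here is a placeholder, not one of my actual targets, and the quicklook pipeline itself does more than this):

```python
import numpy as np
import eleanor

# Placeholder TIC ID, standing in for one entry of the target catalog.
star = eleanor.Source(tic=38846515, sector=1)
data = eleanor.TargetData(star, do_pca=True, do_psf=True)

good = data.quality == 0                 # keep good-quality cadences only
time = data.time[good]
# eleanor produces several lightcurve flavors (raw, corrected, PCA, PSF);
# the corrected flux is a reasonable default for a quicklook.
flux = data.corr_flux[good] / np.nanmedian(data.corr_flux[good])
```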

The plot I’ve attached is the “quicklook” output for a particularly interesting target. The upper left panel shows the four lightcurves that eleanor outputs: a raw lightcurve, a corrected lightcurve, a lightcurve constructed using principal component analysis, and a lightcurve made using a PSF model. The panel below it is the corrected lightcurve after outlier rejection and smoothing with a 2-day median filter. Below that is the power in a BLS search as a function of period, with the peak highlighted by a blue band. The bottom left panel is the corrected lightcurve folded at the best-fit BLS period; this is where I look for a clear, phase-folded transit. The right panel is the Fourier transform of the lightcurve in the second left panel. Here, the features of interest are a slope corresponding to higher power at low frequencies, indicative of stellar granulation (or other red noise), and a “triangular picket fence” of evenly spaced frequencies, which corresponds to solar-like oscillations in these giant stars.
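Both searches can be sketched with standard astropy tools (synthetic stand-in data here so the snippet runs on its own; my actual pipeline differs in the details):

```python
import numpy as np
from astropy.timeseries import BoxLeastSquares, LombScargle

# Synthetic stand-ins for the time/flux arrays from the eleanor sketch.
rng = np.random.default_rng(42)
time = np.arange(0.0, 27.0, 2.0 / 60.0 / 24.0)   # ~27 d at 2-min cadence
flux = 1.0 + 1e-4 * rng.standard_normal(time.size)
flux[(time % 8.0) < 0.2] -= 5e-3                  # toy 8-day transit

# Transit search: BLS power vs. period, then fold at the best period.
bls = BoxLeastSquares(time, flux)
pg = bls.autopower(0.2)                           # 0.2-day trial duration
best = np.argmax(pg.power)
phase = ((time - pg.transit_time[best]) / pg.period[best] + 0.5) % 1.0 - 0.5

# Fourier side: a periodogram in microhertz, to look for the low-frequency
# granulation slope and the evenly spaced oscillation comb.
freq, power = LombScargle(time, flux).autopower()
freq_uHz = freq / 86400.0 * 1e6                   # cycles/day -> microhertz
```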

On the left hand side of this figure, you can see a transit-like feature identified by the BLS search. The duration and shape of this feature would be consistent with a Jupiter-sized planet transiting an evolved star. On the right, the rise in power toward low frequencies indicates a potential granulation signal. Furthermore, a bump in the Fourier spectrum is visible around 200 microhertz, which could correspond to stellar oscillations that would imply a 3–4 Rsun radius for this star. However, this power excess is a little too noisy for proper asteroseismology. Luckily, this was a continuous viewing zone (CVZ) target, so we could combine 12 sectors of observations, which showed that the planet transit is real but the oscillation bump is not. Now to search tens of thousands more lightcurves for a candidate with both asteroseismic and transit signals!

Removing HST Systematics Based on Our Knowledge of K2 Systematics

This week, Rodrigo and I have been working on using our knowledge of removing instrument systematics from Kepler to better understand how to remove instrument systematics from HST. In my research I use HST’s Wide Field Camera 3 to observe transiting exoplanets in the infrared, using the spectrum to try to identify water in the upper atmospheres of planets. This is a tall order, seeing as the signals we are looking for are <200 ppm, while the instrument systematics can easily introduce correlated noise at the 0.1% level. Because of these large systematics, it can be hard to trust small water detections. To address this, we’re interested in removing instrument systematics from HST data in a more Bayesian and principled way. Rodrigo and Dan have both been teaching me a lot about how to do this!

The “figure” I’m including is a movie of the frames from HST observations of HD 209458b. In each exposure, the spectrum of the object is scanned up the detector over ~20 seconds, and many such images are taken over the course of the transit. Several systematic effects make these data imperfect. The scan rate is not exactly even, causing a “wave” pattern as the scan moves up the detector. A geometric distortion causes the dispersion to be slightly tilted with respect to the x and y axes. The flat field for these pixels hasn’t been removed, leaving clear structure that is unrelated to the observation. Finally, the many images taken over the course of the transit are not perfectly registered, causing a drift over time. All of these effects add up to a large systematic component.

This week Rodrigo and I used techniques similar to his PLD pipeline to iteratively fit for the stellar spectrum and the instrument effects. We are able to fit out the distortions, pixel flat field, and image registration errors, and we find a transmission spectrum that is consistent with the literature. Our next steps are to ensure that we are not overfitting, and to add sampling so that we can place robust errors on the spectrum, marginalizing over the instrument systematics. Watch this space!
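As a rough illustration of the PLD-style step (not our actual pipeline, which fits the spectrum and the systematics iteratively and simultaneously), one can regress the lightcurve against the normalized pixel time series and subtract the fit:

```python
import numpy as np

def pld_correct(pixels, flux):
    """Schematic first-order PLD-style regression.

    pixels : (T, P) time series of the pixels in the spectral trace
    flux   : (T,)   extracted lightcurve to be corrected
    """
    # Normalize each cadence by its total so the regressors carry shape
    # (pointing/registration) information rather than astrophysical flux.
    X = pixels / pixels.sum(axis=1, keepdims=True)
    X = np.column_stack([X, np.ones(len(flux))])   # plus a constant term
    coeffs, *_ = np.linalg.lstsq(X, flux, rcond=None)
    systematics = X @ coeffs
    return flux - systematics + np.median(systematics)
```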

Fitting single transits

[Figure: Twitter poll asking what to work on today]

Shortly before group meeting, I asked Twitter what I should work on today (see above). But working on the paper is much less fun than thinking about new features, so I went through a derivation of the right “priors” to use when characterizing a single transit. I have previously published on this topic (as have many others!), and the basic idea is that even if you only detect a single transit, you can place some constraint on the orbital period because the duration of the transit is informative. This constraint tends to be weak because the transit duration is covariant with all of the properties of the system (especially eccentricity, impact parameter, and stellar density). Recently, David Kipping pointed out that the fact that the planet transits at all is a useful datum when fitting such a system. However, the full story of how this information can be included in the fit of a single transit is slightly more complicated than presented in that note, so at group meeting I shared my derivation and got some feedback on how to present it clearly.

The main argument is that you want to make inferences about p(parameters | light curve, the planet transits) instead of just p(parameters | light curve). Since the light curve and the fact that the planet transits are conditionally independent (given the parameters of the system), the joint probability factors as p(parameters) p(light curve, the planet transits | parameters) = p(parameters) p(the planet transits | parameters) p(light curve | parameters), and p(parameters) p(the planet transits | parameters) looks like a “prior”, but it isn’t separable (it correlates the orbital period with all the other parameters). This argument could also be applied to systems with more than one observed transit, but it looks like the effect would be negligible in most cases. Stay tuned for a tutorial with more details!
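For the record, here is the factorization from the paragraph above in symbols, writing T for “the planet transits”, LC for the light curve, and theta for the system parameters:

```latex
% Given theta, LC and T are conditionally independent, so
\begin{aligned}
p(\theta \mid \mathrm{LC},\, T)
  &\propto p(\theta)\, p(\mathrm{LC},\, T \mid \theta) \\
  &= \underbrace{p(\theta)\, p(T \mid \theta)}_{\text{effective ``prior''}}\,
     p(\mathrm{LC} \mid \theta).
\end{aligned}
```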

Finding Anomalies in Galaxy Surveys

[Figure: real HSC images, GAN reconstructions, and residuals (top); anomaly-score distribution and clusters of high-scoring objects (bottom)]

I have been working on a new method for detecting anomalies in astronomical images with deep learning, together with Marc Huertas-Company (Observatoire de Paris) and Alexie Leauthaud (UCSC). The idea is to use unsupervised learning to pick out potentially interesting (or “anomalous”) objects in galaxy surveys, as there is far too much data to look through by hand; the lack of supervision also means we can find interesting things we may not have known to look for. I first trained a Generative Adversarial Network (GAN) to generate fake images that are similar to the real images in the survey. This “generator” learns what the objects look like and how they are distributed: it does a good job reconstructing images like those that are common in the survey, and it is worse at generating the more “anomalous” images.

The top set of images shows a random sample of objects. The first row is the real image, the second is the generator’s attempt at reconstructing it, and the third is the residual. The generator does a pretty solid job on the common blob-like galaxies. It has a harder time with the rarer, more extended objects (and the background noise). I let the generator try to reconstruct a million objects from the Hyper Suprime-Cam survey. Based on these residuals (and other features), I assigned each object an “anomaly score.” The plot shows the distribution of scores for all million objects; you can see a longer tail of high-anomaly-score objects, as there are more ways to be anomalous than to be normal.
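Schematically, the anomaly score for each object combines a pixel-space residual with a feature-space residual; the weighting and exact feature choice below are illustrative, not our tuned values:

```python
import numpy as np

def anomaly_score(image, reconstruction, feat_image, feat_reconstruction,
                  alpha=0.1):
    """Schematic GAN anomaly score: a pixel-space residual plus a
    feature-space residual (e.g. from an intermediate layer of the
    discriminator). alpha sets the relative weighting.
    """
    pixel_term = np.mean((image - reconstruction) ** 2)
    feature_term = np.mean((feat_image - feat_reconstruction) ** 2)
    return pixel_term + alpha * feature_term
```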

I then made a cut to select the top-scoring objects and performed some clustering on them. The bottom sets of images show a random selection of objects from four of these clusters, each labeled with its anomaly score. Not only have we found more anomalous objects (these look qualitatively different from the random set), but we have also organized them a bit to help identify truly interesting anomalies. The GAN identifies anomalies in both morphology and color. Some of the anomalies are bad detections (e.g., red cluster, 2nd and 4th columns) or empty images (greenish cluster, 3rd and 4th columns). But the last group definitely contains some interesting objects! The far right image is a galaxy with a strong tidal feature, which is interesting for galaxy formation studies. So far we have identified multiple galaxies with tidal features as well as some potential AGN; we hope to find other interesting objects, such as green-pea galaxies or even strongly lensed systems.

Up next is improving my GAN and clustering methods to better sort out the anomalies, and continuing to validate our method by cross-correlating our anomaly catalog with known interesting objects and pipeline errors.

The Transition Redshift

[Figure: sketch of possible P21,g measurements around the zero crossing]

I’m working on a short reionization project where we want to figure out where a function crosses zero. I’ll save the details of what this function (P21,g) is for the bottom of this post, but suffice it to say that the redshift where it transitions from being positive to negative (the “transition redshift”) is an interesting quantity. The plot shows a sketch of what the data might look like: the function itself is not well measured, but the data still constrain where it crosses zero.

The issue is that this function will be really hard to measure at these very high redshifts. But we’re not interested in the actual value of the function – we just want to know where it crosses zero. From the data analysis side, these are two different questions. During the meeting, I discussed various ideas from the people I’ve talked to. One simple way to proceed is to fit some reasonable functional form – like a logistic function – to the data and sample over its parameters. The redshift where the model crosses zero is then simply a parameter, so one can back out its posterior distribution in a fairly straightforward way. We decided this will probably do just as well as (or better than) fancier methods.
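Here’s a minimal sketch of that approach with emcee, using made-up mock data and a tanh-shaped sigmoid whose zero crossing is an explicit parameter z_t (all numbers below are placeholders):

```python
import numpy as np
import emcee

# Made-up mock measurements of P21,g at a few redshifts (arbitrary units).
z_obs = np.array([7.0, 7.5, 8.0, 8.5, 9.0, 9.5])
y_obs = np.array([-0.8, -0.6, -0.1, 0.3, 0.7, 0.9])
y_err = 0.2 * np.ones_like(y_obs)

def model(theta, z):
    A, z_t, w = theta
    # Sigmoid that is positive above z_t, negative below, and crosses
    # zero exactly at z_t, so the transition redshift is a parameter.
    return A * np.tanh((z - z_t) / w)

def log_prob(theta):
    A, z_t, w = theta
    if not (0.0 < A < 10.0 and 5.0 < z_t < 12.0 and 0.05 < w < 5.0):
        return -np.inf                     # flat priors, wide boxes
    resid = (y_obs - model(theta, z_obs)) / y_err
    return -0.5 * np.sum(resid**2)

nwalkers, ndim = 16, 3
p0 = np.array([1.0, 8.0, 0.5]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)
z_t_samples = sampler.get_chain(discard=500, flat=True)[:, 1]  # posterior
```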

Some more info about the function: during the early stages of reionization, a transition occurs where the 21cm signal and the density field go from being positively correlated on large scales to being negatively correlated. The sign can be measured in the cross-power spectrum between the two fields (the function P21,g mentioned earlier). In work in prep., we’ve shown (fairly easily) that the redshift at which this transition occurs (the “transition redshift”) is insensitive to the properties of whatever tracer of the density field you use (e.g., a galaxy survey or an intensity mapping field). The final piece of the project is simply showing that this transition redshift is measurable even though the cross-power spectrum itself isn’t.

Doppler Imaging

[Figure: Doppler-imaging test on the synthetic VOGT star; data and rotational phases (left); input map, inferred map, and uncertainty (right)]

I’m generally interested in the problem of mapping the surfaces of stars and exoplanets, and while I’ve focused so far on doing this with photometric timeseries, for the past couple of months I’ve been tinkering with spectroscopic data and the idea of Doppler imaging. The basic idea is to take high resolution spectra of a rapidly rotating star at different rotational phases and look at the distortion of its spectral lines as starspots rotate from the blueshifted hemisphere to the redshifted hemisphere. One of the seminal papers on this method is Vogt et al. (1987), who ran an experiment in which they “painted” the letters VOGT on the surface of a star, generated a bunch of synthetic spectra, and attempted to recover the stellar surface map from that dataset. The authors devised a method that successfully recovered the VOGT pattern, and it has since been adapted and applied to dozens of real systems, often quite successfully.

However, the techniques employed in Doppler imaging are often computationally expensive, as they require running radiative transfer models in thousands of discrete cells on the surface of the star, then numerically integrating the specific intensity to arrive at a model for the observed spectrum. Most studies therefore focus primarily on optimization: given a fiducial set of stellar parameters (such as inclination, angular velocity, differential rotation parameter, etc.), the goal is to find the single surface map that best matches the observed spectrum, usually via a non-linear optimizer with priors that make convergence fast. While this works well, it can mask the great degree of uncertainty inherent to the problem, hide the various degeneracies at play, and introduce biases, since parameters like the stellar inclination are rarely known precisely.

To improve on this, I’ve been working on extending my starry code (Luger et al. 2019 and github.com/rodluger/starry) to work with spectra by expanding the wavelength-dependent stellar surface map in spherical harmonics. I worked out some cool analytic results that allow me to not only compute a model for the spectrum analytically, but also to invert the problem and quickly solve for both the maximum likelihood surface map and its uncertainty. The figure above shows a test of my code on the classic VOGT star. The 16 epochs of data are shown on the left, next to the phases of the star seen by the observer. The input map is shown at the top right, with the inferred map below it, and the intensity uncertainty at the bottom.
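The linear-algebra core of such a solve might look like the following sketch (a generic Gaussian-prior linear inversion, not starry’s actual spectral solver, which handles the wavelength structure more carefully):

```python
import numpy as np

def solve_surface_map(A, f, data_ivar, prior_ivar):
    """Gaussian linear inversion for the surface-map coefficients.

    A          : (M, N) design matrix mapping coefficients to spectra
    f          : (M,)   flattened observed spectra
    data_ivar  : (M,)   inverse variances of the data
    prior_ivar : (N,)   inverse variances of a Gaussian prior on the
                        coefficients (regularizes the ill-posed modes)
    Returns the posterior mean map and its covariance.
    """
    AT_Cinv = A.T * data_ivar                # A^T C^-1 (columns scaled)
    Sigma_inv = AT_Cinv @ A + np.diag(prior_ivar)
    cov = np.linalg.inv(Sigma_inv)
    mu = cov @ (AT_Cinv @ f)
    return mu, cov
```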

Python Toolkit for Background-Subtracted TESS Lightcurves

[Figure: target pixel and predictor pixels (top); CPM prediction and difference lightcurve (middle and bottom)]

Over the summer I’ve been working with Dan Foreman-Mackey and David Hogg on developing a Python toolkit to obtain background-subtracted TESS lightcurves from the Full-Frame Images (FFIs). One of the goals of this project is to detect transient events in the TESS data by doing difference imaging without constructing a reference image. We do that by implementing a method called Causal Pixel Model (CPM) Difference Imaging, explained in detail here and here. CPM predicts a specific pixel’s lightcurve using a linear combination of other pixels’ lightcurves, with the idea that if a pixel’s variability can be explained by other pixels’ lightcurves, it’s probably a systematic background effect. In the figure attached above, we’re trying to predict the lightcurve of the white pixel in the middle of the image (top middle panel) using the lightcurves of all the black pixels. The lightcurve in the middle of the figure shows the white pixel’s lightcurve (black line) along with the CPM prediction (red line). The blue lightcurve below that is the difference between the data and the model, showing that CPM does a pretty good job of removing the large systematic effects (caused by scattered light from the Earth). We still have a few problems to solve, such as figuring out how to choose good predictor pixels (we don’t want our predictor pixels to have transits in them), but we’ve tested this model on several different sources (including supernovae) and our preliminary results look promising! The toolkit is still in development, but we’re planning to make it available to the community soon and are excited to get feedback.
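At heart, CPM is regularized linear regression; a schematic version (the ridge strength lam is an illustrative value, and the toolkit’s actual implementation differs in the details):

```python
import numpy as np

def cpm_difference(target, predictors, lam=1e3):
    """Schematic CPM step: model one pixel's lightcurve as a regularized
    linear combination of other pixels' lightcurves.

    target     : (T,)   lightcurve of the pixel being modeled
    predictors : (T, P) lightcurves of pixels chosen far enough away to
                        share systematics but not the star's own signal
    lam        : ridge regularization strength (illustrative value)
    """
    X = predictors
    # Ridge regression: solve (X^T X + lam I) beta = X^T y.
    beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                           X.T @ target)
    model = X @ beta
    # In practice the fit is done in a train-and-test framework so the
    # model cannot absorb real astrophysical signals.
    return model, target - model
```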

The straight stellar stream in Cen A

[Figure: left: map of Cen A and its streams (Crnojevic et al. 2016); right: mock stream generated with gala]

I recently attended IAU Symposium 355, “The Realm of the Low Surface Brightness Universe”, where I had the chance to get an update on the current observational status of, among other things, detecting streams in external galaxies. Denija Crnojevic showed her map (Crnojevic et al. 2016) of the nearby galaxy Centaurus A (see the left panel of the figure above). This led to some discussion of the “straightness” of the stellar stream emerging from the dwarf galaxy labeled Dw3 in the figure. Some people found the straightness odd and argued that the stream must be part of a “plane of satellites”. I thought, instead, that maybe the stream is straight because it is close to apocenter and has a large velocity component in the direction of the plane of the stream. To test this idea, I generated mock streams using the “particle spray” method of Fardal et al. (2015), as implemented in gala (Price-Whelan et al. 2018), to see if I could reproduce the overall morphology of the stream in a simple potential with a disk, bulge, and dark matter halo. Specifically, I attempted to reproduce the stream’s S-shape, width, length, lack of curvature, and projected separation from its host, Cen A. After a few attempts, in which I used 1) the S-shape to determine the direction of motion, 2) the width of the stream to estimate the mass of the progenitor, and 3) the length of the stream to estimate its age, and in which I 4) fixed the distance of Dw3 to 60 kpc above the disk of Cen A (as observed in projection), I managed to generate a stream that looks very similar to the observed one (see the right panel of the figure above). The blue line shows the last 4 Gyr of the progenitor’s orbit, and the blue points show a mock stream evolved forward from the endpoint of that orbit to the present day. This simulated stream is definitely not the final solution, but with some tweaks to make the potential more similar to that of Cen A (right now I used a Milky Way-type potential) and with kinematic information from Denija’s collaborators, we hope to get a good match to the system. The point here is simply that finding a somewhat reasonable solution wasn’t difficult, and that there’s nothing odd about this stream.
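For reference, the gala call pattern for this kind of experiment looks roughly like the sketch below; the progenitor coordinates and mass here are placeholders, not the tuned values that produced the figure:

```python
import astropy.units as u
import gala.dynamics as gd
import gala.potential as gp
from gala.dynamics import mockstream as ms

# A Milky Way-like potential (as noted above, Cen A would need tweaks).
H = gp.Hamiltonian(gp.MilkyWayPotential())

# Placeholder progenitor coordinates at the present day; the real values
# were tuned to match the S-shape and the 60 kpc projected separation.
w0 = gd.PhaseSpacePosition(pos=[15.0, 0.0, 60.0] * u.kpc,
                           vel=[0.0, 170.0, 0.0] * u.km / u.s)

# Integrate the progenitor 4 Gyr into the past...
orbit = H.integrate_orbit(w0, dt=-1 * u.Myr, n_steps=4000)

# ...then spray particles (Fardal et al. 2015) as it evolves forward.
gen = ms.MockStreamGenerator(ms.FardalStreamDF(), H)
stream, prog = gen.run(orbit[-1], prog_mass=5e7 * u.Msun,
                       dt=1 * u.Myr, n_steps=4000)
```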