In 2025, UVEX started taking up more and more of my time on the mission development and engineering front. Even so, it’s always good to look back and take a moment to appreciate the science that I was able to contribute to last year. This started off as a quick “I’ll do this on a Sunday” exercise. However, as I got into it, I was having too much fun re-reading the papers and writing up the little blurbs below. This turned into a daily 15-20 minute writing project over coffee or tea for each of the papers below.

The Universe of Papers

I went to the NASA ADS server and issued a query for all of the papers with year:(2025) author:”grefenstette” and filtered down to only refereed papers. The rest are largely alerts that I or members of the NuSTAR SINGS collaboration have sent out via the General Coordinates Network (GCN) or Astronomer’s Telegram notices for things that were found in the NuSTAR data. These are mostly “Hey, we saw this interesting thing” content rather than “Hey, we did some cool science”.

This leaves us with 14 science papers for the year. A majority of these I worked on in a consulting role. Two are UVEX science papers (the first ones!). Publications went into the Astrophysical Journal (ApJ), the “Letters” version of ApJ, Publications of the Astronomical Society of the Pacific (PASP), and Physical Review Letters (PRL). Below, the papers are listed in whatever reverse chronological order that ADS uses.

The Papers

Paper link: https://ui.adsabs.harvard.edu/abs/2025ApJ...995..189H/abstract

Lead author: Yirfan Hu

Summary:

This was a paper written by one of the Summer Undergraduate Research Fellows (SURFs) at Caltech last summer. In an impressive display of efficiency, Yirfan did a literature survey, got an observation of an exoplanet-hosting star (AU Mic) approved, analyzed the data, and wrote the paper, all in the 8 weeks of a summer program.

The goal of the paper is to fold in hard X-rays from NuSTAR along with EUV and soft X-ray coverage from Swift-XRT and the Einstein Probe to determine how much flux is being absorbed by the atmosphere of the exoplanet. It turns out that dwarf stars are the easiest ones for us to discover exoplanets around (because the planets are closer to their host star, they orbit more frequently, so we get more chances to see them pass in front of their host star). However, those planets are also in the danger zone from stellar flares (because they’re so close in to their host stars). The paper is incredibly well written and discusses the analysis of two flares seen by NuSTAR in the observation and the implications of these data for how much energy the flare produced (and therefore how much flux the exoplanet probably saw).


My contribution:

Largely here I served as a NuSTAR data analysis consultant (Yirfan was primarily working with Murray Brightman, another research scientist in the High Energy Astrophysics group) and advised on how to interpret the stellar flare data. I’ve written a number of papers discussing NuSTAR observations of our own star and of stellar flares on nearby young stars, so I helped a little bit with interpreting the data and provided some feedback on the overall presentation in the paper.

Paper link: https://ui.adsabs.harvard.edu/abs/2025ApJ...994..169L/abstract

Lead author: Songwei Li


Summary:

Songwei was a graduate student working with Renee Ludlam at Wayne State University who has since successfully defended his thesis. This paper is based on NuSTAR data both from focused observations and from the NuSTAR StrayCats Catalog. There’s a nice JPL write-up on using the stray light data (usually a bad thing) for “extra” science in NuSTAR. The key takeaway here is that it is really hard to get lots of monitoring observations with space telescopes, so we take whatever we can get. In this case, we were studying GX 340+0, which is a neutron star orbiting a “normal” star. The two stars are orbiting so close together that the neutron star rips material away from its companion, resulting in clouds of hot, dense gas that glow in the X-rays as the material flows onto the neutron star. These systems are known for having pretty wild variability (think flickering) with different regions of the gas cloud glowing in different parts of the X-ray spectrum. GX 340+0 happens to be near the Galactic Plane, so we get stray light from this source when we observe other systems in the Plane that are nearby when projected onto the sky.

Songwei analyzed a large number of the StrayCats data as well as data from this target in the NuSTAR data archive at the High Energy Astrophysics Science Archive Research Center (HEASARC), hosted at NASA’s Goddard Space Flight Center, to study the source variations over time. Even via stray light data, NuSTAR still provides better energy resolution (the ability of the telescope to split out different colors) than most other instruments in the hard X-ray band. The general result is that these systems are extremely complicated (imagine taking a snapshot of a campfire at one point in time and then trying to figure out how wood burns) and the new data let you study how the system changes over decades.

My contribution:

I’m the PI for StrayCats and wrote a lot of the analysis tools (via nustar-gen-utils) that are used for analyzing these data. It’s both more complicated and simpler than “standard” NuSTAR analysis because you only have to consider the detectors (the camera) without thinking about the optics (the telescope). But the signal is a lot lower, so you have to be clever about how you think about the background light (both from the sky and in the camera itself). So my contributions here included suggestions on how to deal with the background as well as helping make sure that various analysis steps were working as advertised. I’ve written a few papers on systems like this before, but Renee is one of the world’s experts on neutron star systems and interpreting the X-ray spectrum (the “X-ray rainbow” that we see from these systems) to figure out what’s actually going on.

Paper link: https://ui.adsabs.harvard.edu/abs/2025ApJ...993L...6N/abstract

Lead author: Nayana A. J. 

Summary:

Nayana is a postdoc at UC-Berkeley working with Raffaella Margutti. There’s another paper down below on supernovae. This one is on AT 2024wpp (AT = astronomical transient, with the YYYY giving the year of discovery and the trailing letters cycling through the transients identified that year). This is part of a class of new transients called “Fast Blue Optical Transients” (FBOTs). The prototype was the fortunately named AT 2018cow. We don’t really know what these things are yet, as noted in the Press Release associated with this paper. One idea is that these are newborn neutron stars, born in the collapse of a massive star at the end of its life during a supernova explosion. As material from the supernova falls back onto the neutron star, the material lights up in the X-rays, radio, and lots of other wavelengths. The paper is a little dense (it has to be…we’re trying to piece together what happened from limited data), but the discussion section is worth a read (and there are some cartoons showing the emission mechanisms). These FBOTs are a new phenomenon (2018cow was the first one, and they’re hard to find), but something that’ll be even more exciting in the UVEX era.

My contribution:

I didn’t get as much done on this paper compared to others. I’ve worked with Raf for a while now both on 2018cow as well as on other supernovae. For this one, I probably barely did enough to be added as a co-author (only through lack of time, not lack of interest!).

Paper link: https://ui.adsabs.harvard.edu/abs/2025PhRvL.135n1001R/abstract

Lead author: Jaime Ruiz

Summary:

This paper is the result of many, many years of analysis, re-analysis, and updating simulation work. It was the main reason that NuSTAR originally looked at the Sun. The general gist of things is that we can use NuSTAR to address questions in particle physics as well as study supermassive black holes (I’m kind of proud of that line). Hannah Earnshaw (now the NuSTAR Project Scientist) did a great summary write up of a number of recent papers on the topic.

The general idea here is to postulate an “axion-like particle” (or ALP), which is a candidate for the particle that makes up the dark matter in our Universe. There’s a long back story on these particles (and, in general, on the potential dark matter candidates). The idea is that ALPs can be generated in the center of the Sun. When they are, they get to “escape” the Sun without having to bounce around in a thermal process to get out from the core (because “bouncing” is another name for “electromagnetic interactions” and ALPs don’t do that). However, when they emerge from the Sun they do have an opportunity to react with the strong surface magnetic fields and convert back to X-rays. At a 2016 NuSTAR science team meeting, this was presented as a potential “ghostly” image of the Sun in X-rays.

I’m burying the lede here, but we haven’t seen anything (yet). This paper was originally based on “quiet” Sun observations taken way back in 2014. Those observations were a mosaic (meaning “multiple NuSTAR images merged together”) of the Sun during a period where there was (relatively) little solar activity. The Sun goes through 11-year cycles of “peak flaring activity”, with the most recent one just winding down now. If you scroll back 11 years you can figure out that in 2014 we were near solar max, so the ghostly signal was hard to search for because of all of the other X-rays being produced on the solar surface in sunspots / flaring regions. We redid the observation in 2020 during the intervening solar minimum to stare just at the core of the Sun. And didn’t see anything.

However, null results are important, too. In science parlance, they put an “upper limit” on how bright the signal could have been for us to not notice it. In the case of ALPs, it’s even more complicated, because what we’re actually doing here is trying to put a limit on how often an ALP can interact with the Sun’s magnetic field…and we don’t really know the Sun’s magnetic field that well, either! So, the hard part is doing really robust data analysis to place the best limits on the X-ray signal, and then doing really robust modeling of the Sun to figure out what limits one can place on the ALP interaction (which is what we’re after here).
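If you want a concrete (toy) picture of the upper-limit game, here’s a little Python sketch of my own. It is not the machinery from the paper (which also has to fold in instrument response and solar modeling); it just shows the core idea: given the counts you saw and the background you expected, find the biggest hidden signal that could still plausibly leave you seeing so few counts.

```python
import math

def poisson_cdf(n, mu):
    # Chance of seeing n counts or fewer from a Poisson process with mean mu.
    return math.exp(-mu) * sum(mu**k / math.factorial(k) for k in range(n + 1))

def upper_limit(n_obs, background, cl=0.90):
    """Classical upper limit: the signal rate s at which seeing <= n_obs
    counts from (background + s) becomes as unlikely as (1 - cl)."""
    lo, hi = 0.0, 100.0 + 10.0 * n_obs
    for _ in range(100):                      # simple bisection
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, background + mid) > 1.0 - cl:
            lo = mid                          # signal this big is still plausible
        else:
            hi = mid
    return hi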

The paper languished for about 10 years because it’s hard. But it’s finally hit publication, which is great because it takes a nagging “We should finish that paper” out of the back of my mind, which has been there ever since 2014.

My contribution:

I was the lead for all of the data analysis tools that let us figure out how to track the Sun across the sky with NuSTAR, convert the NuSTAR “astronomical sky” data into “heliocentric coordinates”, and then figure out how to turn the resulting data back into usable science units. Additionally, you have to actually do something clever with the data, because the sky isn’t (that) dark in the X-ray band and you get light from distant black holes leaking into your images. The analysis in the final paper was largely re-done by one of Jaime’s students based on work that David Smith (who was also my Ph.D. advisor at UCSC) and his students started years ago. Without getting too into the weeds, how confident you can be that something (the signal) is not there depends on how confident you are in your understanding of what (the background) is there. I think we ended up in a good place in the paper, and Jaime and his team did all of the heavy lifting on the solar magnetic field modeling to generate the final upper limits.

Paper link: https://ui.adsabs.harvard.edu/abs/2025ApJ...990...23L/abstract

Lead author: Fiona Lopez

Summary:

Fiona is now a graduate student at New Mexico State University, but this paper was the culmination of impressive work done with Dan Wik at the University of Utah during an NSF Research Experience for Undergrads (REU). Fiona was looking at an issue that occurs when pointing X-ray telescopes at galaxy clusters in the distant Universe. Galaxy clusters are large groups of galaxies (with hundreds or thousands of members) that, even as far away as they are, can be resolved (meaning they don’t look like a single point source) by X-ray telescopes such as Chandra, XMM-Newton, and NuSTAR (among others). Missions like eROSITA explicitly use the search for distant galaxy clusters and mapping their properties (like the temperature of the gas between the galaxies in the cluster) over cosmic time to understand the evolution of the Universe. However, there’s a problem. X-ray telescopes are notoriously hard to calibrate so that one telescope reports the same results as another. Fiona looked at a number of galaxy clusters, breaking down code that the missions tend to provide “as is” (including the NuSTAR code) into parts that can be tuned for particular analyses. This is an example of a very hard technical problem that has fundamental implications for our understanding of the Universe through fairly subtle improvements in our understanding of how the telescopes work.

My contribution:

Largely I was on this paper as a technical consultant for NuSTAR, providing additional info on how the standard processing tools work and, as the NuSTAR instrument scientist, serving as someone who understands the effects (and limitations) of the calibration of the detectors as well as the telescope as a whole.

Paper link: https://ui.adsabs.harvard.edu/abs/2025ApJ...989..202E/abstract

Lead author: Hamza El Byad

Summary:

One of the most surprising, and scientifically valuable, results from NuSTAR so far has been the discovery of “ultra-luminous pulsars”. These are bright X-ray sources in other galaxies that pulse like a lighthouse in X-rays, but the pulses are so bright and releasing so much energy that they can break our understanding of how fast material can flow onto a neutron star. However, this discovery was only possible because a supernova happened to go off in the nearby galaxy M82 (a.k.a., the Cigar Galaxy, for Messier catalog aficionados) and NuSTAR stared at the galaxy for almost a month back in 2014 trying to detect the faint X-rays from the supernova. And Matteo Bachetti, who was a postdoc at the time, decided to go look and see if the other, previously known X-ray sources in M82 were doing anything interesting. It’s the closest to a “Eureka!” moment or a “Huh, who ordered that?” moment I’ve ever been around in my scientific career.

This paper (by Hamza, who is working with Matteo at the University of Cagliari) is the culmination of what Matteo was originally trying to do: study the flickering of the bright X-ray sources in M82. Of course, now we have both that original month of data as well as weeks and weeks of monitoring data of these sources as NuSTAR and other telescopes have revisited M82 over the last decade. Some of these sources are (probably) black holes and the flickering signals can tell us about how the material falling into these systems behaves. In general, you see something called an accretion disk, which can show “ringing” behavior in the flow of the material, which results in the changes in brightness. If we think about an analogy of turning the X-ray light into sound, a “periodic” signal would be a guitarist hitting a single note. A “quasi-periodic” signal would be the guitarist hitting a note and then bending the string to change the pitch over time. In black holes, the initial tone and its width (how much the string is bending) can tell us about things like the black hole mass (which is expected to be in the 10s to 100s of times the mass of our Sun in these systems, which is why it was so unexpected to find a neutron star that only has a mass of a few solar masses instead).
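For the curious, the guitar analogy maps directly onto a power spectrum. Here’s a toy numpy sketch of my own (the frequencies and the amount of “string bending” are made up, and real analyses of these data use dedicated X-ray timing software): a steady note shows up as a razor-thin spike, while a bent note smears into a broader bump.

```python
import numpy as np

fs, T = 256.0, 64.0                        # toy sampling rate (Hz) and duration (s)
t = np.arange(0, T, 1.0 / fs)

# "Periodic": a guitarist holding a single pure 10 Hz note.
periodic = np.sin(2 * np.pi * 10.0 * t)

# "Quasi-periodic": the same note with the pitch slowly bent by +/- 0.375 Hz.
inst_freq = 10.0 + 0.375 * np.sin(2 * np.pi * 0.25 * t)
quasi = np.sin(2 * np.pi * np.cumsum(inst_freq) / fs)

def halfmax_width(signal):
    """Width (in Hz) of the spectral peak, counted at half its maximum power."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    return np.count_nonzero(power > 0.5 * power.max()) / T

# The bent note spreads its power over several frequency bins,
# while the pure note stays in (essentially) one.
```

The same idea, applied to X-ray brightness instead of sound, is how the tone (and how much it’s “bent”) gets measured for these sources.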

Doing all of this analysis requires building a lot of analysis machinery because the signals from the sources can be blended together (think about multiple guitars playing at once) and we’ve only got snapshots of observations (think someone turning the volume on a P.A. system on and off) over time. The results ended up being that the X-ray sources are more or less behaving as expected, but also that the one system that we know contains a neutron star shows the same timing signatures as the ones that we think contain black holes (NB: it’s still possible that all of these systems are actually neutron stars). Astronomers are always trying to make inferences based on incomplete data, so it’s a good reminder that we always need to try to poke holes in our standard interpretation of what’s going on in the Universe.

My contribution:

I’ve been working on some of these data on and off for the last decade, have been working with Matteo on the timing calibration of NuSTAR, and did some early work on the analysis of M82 X-2 (the ultra-luminous pulsar). I gave a little bit of scientific feedback, but I was on this paper largely as part of the bigger collaboration that’s been working together for over a decade rather than doing any of the actual work in this paper.

Paper link: https://ui.adsabs.harvard.edu/abs/2025ApJ...988..157B/abstract

Lead author: Peter Boorman

Summary:

Before we begin, let’s just all agree that astronomers like to come up with cute names. At its heart, astronomy is a taxonomical science where people take loads of data and try to match up things that seem like they’re the same under the same banner. In the natural sciences of the 19th and 20th centuries you can see similar efforts to taxonomically classify Nature, with species labeled with (sophisticated sounding) Latin names. In the 20th century, astronomy used rather dry terms like Supernova Type II or Type Ia, Seyfert Galaxies Type I and Type II (and Type 0.9…). In the 21st century, things have given way to more…creative…names as people have expanded their ability to produce acronyms. Hot DOGs (for hot, dust-obscured galaxies) have given way to the “Little Red Dots” galaxies found by JWST, and “green pea” galaxies found in the Sloan Digital Sky Survey. The latter is really just descriptive (the galaxies are small, and blue/green in their color). Once you’ve identified a class of things, it’s then time to do more detailed observations to understand the astrophysics of each of the things and come up with a physical model of the system.

Peter (then a postdoc at Caltech a few doors down from my office) latched onto one such system and got X-ray observations with the XMM-Newton X-ray telescope. The X-rays (especially when paired with UV data) can tell you a host of things about the galaxy, such as how many stars the galaxy is forming, or whether the supermassive black hole at its core is actively eating the gas from the galaxy, turning the gravitational energy of that gas into X-rays as well as outflowing streams of material that affect the evolution of the galaxy. In this case, the X-ray data do show evidence for a supermassive black hole in this green pea galaxy; this is the first time such a system has been identified, so any physical model of what the green peas actually are (and how they behave) has to allow for both a supermassive black hole at their center as well as a history of the galaxy over cosmic time that allows for the black hole to become active (and not rip apart the galaxy).

My contribution:

My contribution here is largely as a hallway expert on “what else could it be?”. In papers like this where there’s a plausible explanation you also have to try to rule out everything else that could be a plausible explanation. I helped (a bit) with looking at whether the X-ray emission that XMM-Newton observed could have been associated with something short-lived (like a supernova, which also glows in the X-rays and can easily outshine a galaxy for a week or two) rather than the supermassive black hole (which we expect to be active for millions of years rather than a few weeks). Peter found a 2013 observation as well as the 2020 data that were the main thrust of this paper and there’s relatively little variation over the ~7 years between the two time periods, meaning that whatever is making the X-rays needs to be fairly stable over time periods that long. That rules out a supernova explanation, and is one of the elements that supports the model of this system hosting a supermassive black hole.

Paper link: https://ui.adsabs.harvard.edu/abs/2025PASP..137g4501S/abstract

Lead author: Leo Singer

Summary:

This is one of the first UVEX papers to be published. One of the science motivations for UVEX is to have a wide-field UV imager to follow up gravitational wave sources (like this one, which remains the only real sample we have). The problem is that the gravitational wave observatories can tell that something happened, but not precisely where it happened. To do that, you need to search the large regions of the sky where the source could have come from. And you have to do it quickly, because the signal that you’re looking for will be fading rather quickly (in gamma-rays the whole thing is over in a few seconds, while at longer wavelengths the physics at play means that things will last a few days…we think).

There’s a companion paper below (one of many for this group) which, like this one, is thinking about how to tile the error boxes and how to optimize the search pattern. Leo has been working on this kind of thing for years, and brings computer science chops to the party that are particularly impressive. If you like novel takes on how to solve this kind of problem (which isn’t too far off from the traveling salesman problem), this is the paper for you.
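To give a flavor of why this is traveling-salesman-adjacent, here’s a toy nearest-neighbor scheduler of my own devising. This is nothing like Leo’s actual algorithm (which also has to weigh where the source probably is, slew constraints, and the fact that the source is fading); it just shows the basic “visit every tile without wasting motion” shape of the problem.

```python
import math

def greedy_tour(tiles, start=0):
    """Order sky tiles by always slewing to the nearest unvisited one.
    `tiles` are (x, y) positions on a (hypothetical) flat patch of sky."""
    remaining = set(range(len(tiles)))
    remaining.remove(start)
    tour = [start]
    while remaining:
        here = tiles[tour[-1]]
        nearest = min(remaining, key=lambda i: math.dist(here, tiles[i]))
        tour.append(nearest)
        remaining.remove(nearest)
    return tour
```

Greedy ordering like this is fast but generally not optimal, which is part of why the real scheduling problem is interesting.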

My contribution:

At the time I was the Project Scientist for UVEX and I wrote the majority of the instrument simulator that lets you answer the question “What will we detect when we point UVEX in this direction of the sky?”. There will be much more to come on this topic as the design for UVEX matures and we improve our knowledge of things like “How fast can we talk to the satellite?” and “How fast can the satellite slew around the sky?”.

Paper link: https://ui.adsabs.harvard.edu/abs/2025ApJ...987..180G/abstract

Lead author: Brian Grefenstette

Summary:

Hey, look, it’s a first author paper! This paper is one that I’d been working on for a couple of years. The source (GS 1826-238) is a famous one; X-ray sources inherit their names from the mission that discovers them. In this case, “GS” stands for “Ginga Survey” from the Japanese “Ginga” satellite back in the 80s. The numbers that follow are the location in the sky; in this case Ginga thought the source was located at approximately 18h 26m R.A. and -23.8 degrees Dec., which is pretty close to where modern X-ray observatories place the source. GS 1826-238 shows thermonuclear bursts, which are short (100s of seconds) bursts of X-rays that we think originate from the star’s surface as the material there reaches a critical density to ignite fusion from H to He (or He to heavier elements). The fact that there’s a surface at all implies that these things aren’t black holes, so we know that these types of sources are neutron stars that are stealing material from their companion stars.
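As an aside, you can decode names like this yourself. Here’s a toy parser of my own (real catalogs use several variants of this convention, so treat it as an illustration only, keyed to names shaped like GS 1826-238):

```python
def name_to_coords(name):
    """Split an 'HHMM±DD(D)' style X-ray source name into rough coordinates.
    A 3-digit declination field means tenths of a degree ('238' -> 23.8 deg)."""
    field = name.split()[-1]                       # e.g. '1826-238'
    sign = -1.0 if '-' in field else 1.0
    ra_part, dec_part = field.replace('+', '-').split('-')
    ra_h, ra_m = int(ra_part[:2]), int(ra_part[2:])
    dec = sign * float(dec_part)
    if len(dec_part) == 3:                         # tenths of a degree
        dec /= 10.0
    return ra_h, ra_m, dec
```

Feeding it “GS 1826-238” gives back 18h 26m and -23.8 degrees, matching the position above.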

GS 1826-238 is also known as “the clocked burster” because it shows regularly repeating X-ray bursts. The interpretation there is that the source is accumulating enough material to create the right conditions for the thermonuclear bursts every six hours (which is kind of incredible). It’s been extensively studied in its “clocked” state. Then, in 2014, the source changed. We don’t really know why, but the clocked bursts stopped and the X-ray colors from the source changed. I worked with Hazel Yun, who was a Summer Undergraduate Research Fellow (SURF) here at Caltech, on the StrayCats data from this source. This let us monitor the behavior of the source over the ~decade since it transitioned without needing a dedicated observation. The StrayCats data are great, but they didn’t really give us the whole story. So we also applied for, and were granted, a focused observation using NuSTAR. This paper is a summary of that observation. The source is incredibly bright (which is why we can monitor it in stray light), so it required a lot of work to understand how the X-ray cameras on NuSTAR were working, especially through the X-ray bursts that we saw.

The result is that it looks like the source is behaving like a known class of neutron stars, called “atoll” sources (yes, like the island…these things have “island” states and “banana” states based on how they change over time and I won’t get into why they’re called that but you can always ask me). We were also able to use NuSTAR to watch how the bursts changed over time, which had some interesting implications for what was burning (H or He) and where the material was burning on the surface of the star.


My contribution:

I’m the lead author here, did all of the data reduction and analysis, and wrote the majority of the paper.

Paper link: https://ui.adsabs.harvard.edu/abs/2025PASP..137e4101C/abstract

Lead author: Alexander Criswell

Summary:

This is a companion paper to Leo’s above. This one concentrates on UVEX observations of neutron star mergers that are first detected by gravitational wave observatories. Rather than concentrating on the nuts and bolts of how to solve for the solution, here Alex takes our best* understanding of things like the expected 3D distribution of these sources (how far away they are and how well we think the gravitational wave observatories will be able to localize them) and folds this in with the practicalities of real observations on the sky (do the sources fall into regions of the sky with lots of foreground light? How long does UVEX need to look to be able to see really faint targets?). The results give us some sense of how UVEX might end up selecting which targets to try to follow up (if we were in a situation where there were loads and loads of neutron star mergers to choose from during the UVEX baseline mission in 2030-2032).

*extrapolating from a sample size of one…

As we get closer to launch, we expect to revise papers like this to account for any updates in the UVEX bandpass designs (e.g., how well can we get rid of the sunlight reflecting off dust in the solar system?), how fast we can slew UVEX across the sky, and how fast we can get commands up to the observatory to go chase down these targets. And, of course, if we happen to actually get another merger before UVEX launches, it will help pin down the uncertainty in the underlying physics that we’re using to estimate how bright the source is.

My contribution:

The same as for Leo’s paper above. I wrote the majority of the instrument simulator that lets you answer the question “What will we detect when we point UVEX in this direction of the sky?”. This includes folding in the spatial distribution of what the UV sky looks like in different directions (and what the color of the light looks like, which is important to figure out what UVEX will see).

Paper link: https://ui.adsabs.harvard.edu/abs/2025ApJ...985...51N/abstract

Lead author: A.J. Nayana

Summary:

SN2023ixf is one of the most remarkable nearby supernovae that we get to study. It went off in M101 (the Pinwheel Galaxy), which is such a popular target for amateur astronomers to image that we have precise measurements of the time the light from the exploding star first reached Earth (with an accuracy of 5 minutes…it’s usually days). We did a nice write-up on the earliest NuSTAR observations of this source, which were able to track the blast wave as it was making its way through the material ejected by the host star only decades before it exploded.

In this paper, Nayana undertook a pretty epic analysis task by adding together data from almost every X-ray telescope in the sky as well as observations from radio telescopes on the ground. This is what we call “multi-wavelength” astronomy in the biz and it gives you a holistic view of what’s going on. Each type of telescope is giving you a slightly different piece of the puzzle (there’s a good Buddhist parable about blind men all touching an elephant and reporting back what they find…usually astronomers only get one blind man touching one part of the elephant). When you add together all of the different pieces of information in this case, you can figure out that the last few centuries of this star’s life were chaotic, with the massive star ejecting parts of its atmosphere as it started to collapse in on itself towards a supernova.

My contribution:

I was the PI for the original observations of SN2023ixf (which went off on a Friday literally the day that we had submitted a major proposal…) and did some of the early NuSTAR data analysis work. Nayana reproduced a lot of that herself in this paper, and I contributed mainly as a consultant after the first data were taken.

Paper link: https://ui.adsabs.harvard.edu/abs/2025ApJ...979L...6M/abstract

Lead author: Lea Marcotulli

Summary:

Lea was a postdoc at Yale (but largely visiting Caltech) when she wrote this paper on one of the brightest distant galaxies in the X-rays. The NuSTAR News nugget has a really good write-up of what we think happened. The source is a quasar (“quasi-stellar objects” were named in the 1950s before we really knew what they were…now we know they are massive black holes in the centers of distant galaxies), which produces copious amounts of emission across the electromagnetic spectrum. In the X-rays, it is bright enough that it can easily be seen even though it is extremely distant. (Just a reminder: because of the finite speed of light, things really far away are like a time machine for us to see things as they were a long time ago…for those in the know, this source has a redshift of 6.19 and we were seeing it less than a billion years after the Big Bang.) At such early times, a lot of the evolution of galaxies hadn’t happened yet and we think that the merger of galaxies is how you make really big black holes.

In multiple observations of this quasar, Lea found that the brightness was varying significantly. Due to things like time dilation (yes, that scene in Interstellar didn’t get it quite right, but you get the idea), variations that we’re seeing over months with our telescopes in orbit around the Earth were happening much faster in the galaxy itself. Couple this with the extreme changes (the brightness more than doubled) and you’ve got an interesting probe into how active these massive black holes can be in the early Universe.
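The arithmetic behind that statement is simple enough to show. A quick sketch (the 90-day figure below is just an illustrative number I picked, not a measurement from the paper): at redshift z, a stretch of time we watch from Earth corresponds to a stretch (1 + z) times shorter at the source.

```python
def rest_frame_days(observed_days, z):
    # Cosmological time dilation: clocks at redshift z appear to us
    # to run slow by a factor of (1 + z).
    return observed_days / (1.0 + z)

# At z = 6.19, three observed months unfold in under two weeks at the quasar:
rest_frame_days(90, 6.19)        # ~12.5 days
```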


My contribution:

Lea was sitting down the hall while she was working on this paper, so my contributions largely came as a “does this analysis look about right?” check when trying to figure out if there was anything pathological in the NuSTAR data that could artificially make the source look twice as bright (spoilers, no there’s not).

Paper link: https://ui.adsabs.harvard.edu/abs/2025ApJ...979...72W/abstract

Lead author: Jooyun Woo

Summary:

I have a long history with Cassiopeia A. It’s the remnant of a star that exploded in the late 1600s, sending out material into interstellar space. You can think of supernova explosions like this as “recycling” elements heavier than He into the galaxy to make things like the Earth (and us). There’s a great periodic table of the elements showing their origin, which is probably worth its own post at some point. For Cas A, this includes radioactive elements fused together in the collapsing star before it exploded (which was part of the key NuSTAR science goals when it launched) as well as more mundane gas streaming away from the star in a blast wave that’s still screaming through the galaxy at over 10,000 km per sec almost four hundred years after the star exploded. Where the blast wave is hitting material in the galaxy it produces a shock wave that, combined with twisted magnetic fields at that interface, can accelerate electrons to nearly the speed of light. Those electrons, in turn, lose their energy by releasing light (X-rays, in this case).

In this paper, Jooyun used NuSTAR to see how the remnant has changed over the ~decade between the original observations that NuSTAR took in 2012-2014 and new, deep observations taken in the early 2020s. This is one of the rare cases where you can actually see something that’s several light-years across in the sky changing over time. There’s a great NuSTAR News article describing the results in more detail, but the punch line is that the changes NuSTAR sees tell us about where, how, and when particles are being accelerated in the shock wave.

My contribution:

Analyzing images like this is complicated, especially for a telescope like NuSTAR. And figuring out “what changed” over time is also hard, because you’ve got to understand things that changed in the instrument and in our analysis software and distinguish those from what’s changing on the sky. Jooyun and I got to sit down at several American Astronomical Society meetings to go through the details of what was going on, which was a lot of fun, because I hadn’t had much time to go back to Cas A in years, and Jooyun did an amazing job working through all of the details of these observations.

Paper link: https://ui.adsabs.harvard.edu/abs/2025ApJ...978..118B/abstract

Lead author: Peter Boorman

Summary:

This paper is a long-time-coming analysis of a truly staggering set of observations. Peter writes incredibly detailed (and yes, dense) papers that thoroughly sift through all of the data to tease out the physics of what’s going on. NuLANDS is a sample of over a hundred (relatively) nearby supermassive black holes that live in the centers of galaxies. This is part of a “Legacy survey” that NuSTAR performs as a service to the community. The survey is designed to look at galaxies that have been observed in the infrared and show signs in that waveband that their central black holes are actively eating gas from the galaxy (an “active galactic nucleus”, or AGN). When we observe this sample with NuSTAR, Peter can then use the X-ray data to figure out how much of the energy is coming out in X-rays (compared to the infrared) and what’s going on in the physical environment closer to the black hole.

It turns out that in a lot of these nearby galaxies, the black hole itself is hiding behind gas and dust close in, which obscures the X-ray light (the infrared light comes from further out). Peter did a news conference at AAS in January 2025, which got picked up by a lot of astro news outlets (like this Space.com article). It’s worth a read. But the general idea is that if you look out at the night sky with X-ray eyes that can’t see through a lot of gas and dust, then you’ll undercount the number of black holes in the Universe (by a lot). How and when (over cosmic time) these black holes are active is an essential piece of information about the evolution of the Universe.

My contribution:

Mostly helping support Peter’s data analysis here and giving some feedback on the paper itself. This work has been going on for years and required literally hundreds of observations to be planned and executed, so a lot of the core NuSTAR team (both the operations and the instrument team) were invited onto this paper in recognition of that effort.

Retrospective:

That was a fun project that consumed about three weeks of writing time. I’m finishing it now on Jan 17, having started basically on New Year’s Day. I’ve enjoyed the “daily writing” aspect of it, and it’s been fun to go back and look at some of the work that I’ve been involved with (either directly or tangentially) over the last year. Several of these papers took years to mature from idea to observation to implementation. And the topics span from astro-particle physics all the way to what one might call “standard” astronomy (counting and classifying things).

I’m honestly not sure what 2024 looked like science-wise, so maybe it’s a good time to work backwards through time and do the same thing again (rather than rewinding all the way back to 2008 for my first published paper and working forward). But I’ll definitely think about doing this again for 2026 and beyond, since I imagine the papers that come out of the UVEX work are going to broaden the science horizons even more.
