- Okay. So our next talk is by Rachel Fernandes. So Rachel, you've got a 15-minute talk, you've got your slides up, and I think you're good to go. - Okay, well, thank you, Josh. Hello, everyone. My name is Rachel Fernandes and I'm a fifth-year PhD candidate at the Lunar and Planetary Lab at the University of Arizona, and my talk today focuses on my effort to detect transiting planets in young stellar clusters and moving groups. So let's first start by defining what eta-Earth is. Eta-Earth is the frequency of Earth-sized planets in the habitable zone of a Sun-like star. Now, Kepler discovered thousands of exoplanets, most of which are closer to their star than Mercury is to our Sun. Additionally, Kepler wasn't able to detect any reliable Earth-sized planets in the habitable zone, which means that we couldn't measure eta-Earth directly. As a result, we have to rely on extrapolations of this more abundant small, short-period planet population in order to calculate eta-Earth. A 2018 study led by Gijs Mulders simultaneously fit the occurrence of small close-in planets, the extrapolation of which yielded an eta-Earth value of about 36%. Now let's take a closer look at this close-in population. This is a contour plot of the reliable Kepler planets, with the yellow regions being relatively more dense than the purple ones. The most prominent feature here is the radius valley at about 1.6 Earth radii. Essentially, the radius valley causes this close-in small-planet population to have a bimodal distribution, with the first peak at about 1.3 Earth radii, known as the super-Earths, and the second peak at about 2.4 Earth radii, also known as the sub-Neptunes. Now, this radius valley feature is thought to be evolutionary, with possible explanations being XUV photoevaporation and core-powered mass loss. Since this is evolutionary, it could be that planets that initially formed as sub-Neptunes have been stripped of their atmospheres and now mimic Earth-sized planets.
In that case, the population of small planets that we are trying to fit and extrapolate in order to calculate eta-Earth is thought to have been contaminated by these planets that didn't really form like Earth. A 2019 study led by Ilaria Pascucci aimed to understand how these close-in stripped cores affect our estimation of eta-Earth. Since the effect of photoevaporation is felt more strongly by closer-in planets than by those further out, they first fit only the planets beyond 10 days and got an eta-Earth value of about 10%. They then did the same for planets beyond 25 days and got a value of about 5%, which is a four- to eight-times drop from the 36% that we get from fitting the entire population. This implies that the population of small, short-period planets may be contaminated by the stripped cores of sub-Neptunes, and hence is not representative of the planets that formed like Earth and should not be used to calculate eta-Earth. So now we're faced with this question: how do we quantify the contamination by the stripped cores of once-sub-Neptunes? So, with the TESS mission we now have the unique opportunity to detect planets around young stars in clusters and associations, thereby providing a sample much closer in time to the primordial short-period planet population. For those unfamiliar, during its primary mission TESS monitored each strip of the sky for approximately 27 days, which means that it can detect planets with periods out to about 14 days, assuming two transits per candidate. Now, there were about 200,000 bright, nearby stars that were preselected for the two-year primary mission to be observed in a short two-minute cadence mode. In addition, TESS also obtained images of each sector at a 30-minute cadence, also known as full frame images or FFIs. In fact, as you can see in this plot by Trevor David from 2019, young planets detected by K2 and TESS have already begun to populate the gaps in the Kepler distribution.
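The two-transit period limit for a single TESS sector can be sketched with a quick back-of-the-envelope calculation. This is a minimal sketch of my own: the ~27.4-day baseline and the factor of two are the only inputs, and the function name is invented for illustration.

```python
# Rough two-transit detectability limit for one continuous TESS sector.
# Requiring at least two transits within the observing baseline caps the
# longest detectable orbital period at roughly half the sector length.
SECTOR_BASELINE_DAYS = 27.4  # approximate single-sector baseline

def max_period_two_transits(baseline_days):
    """Longest period that still guarantees two transits in the window."""
    return baseline_days / 2.0

print(max_period_two_transits(SECTOR_BASELINE_DAYS))  # ~13.7 days
```

This is why the talk quotes a ceiling of "about 14 days" for a blind search in primary-mission data: a single transit, like one of the two missed planets mentioned later, falls outside this criterion.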
So far only a handful of transiting young planets have been discovered, compared to the thousands of gigayear-old planets. So there is a need to detect more such young transiting planets in order to have a better understanding of the ways in which these planets evolve with time. Now, in order to detect planets in these young clusters and to calculate their occurrence rate, I built a pipeline named pterodactyls that builds on publicly available packages that are customized to find planets around young stars. Pterodactyls has five major components. First, we extract light curves from the TESS full frame images, the 30-minute cadence data. We then detrend these light curves using a penalized spline, which I will go into in more detail in the next few slides. Next, we use Transit Least Squares, or TLS, in order to search for and do a preliminary fit of the candidates, which are then vetted by EDI-Vetter and triceratops. And lastly, we use a nested-sampling approach via EXOTIC in order to better fit the transit parameters. In order to develop and test pterodactyls, we limited our search to only those young clusters in which TESS had already found transiting planets during its primary mission, that is, the Tucana-Horologium association, IC 2602, Upper Centaurus Lupus, Ursa Major, and the Pisces-Eridani moving group. For these clusters and associations, we relied on the memberships provided in the BANYAN Sigma and Gaia-Tycho open cluster member lists, which gives us a total of about 2,000 stars, 600 of which were observed during the TESS primary mission. Now, just a quick note: most of these planets were initially detected in two-minute data. My goal here is to use pterodactyls to recover the same planets from the TESS FFIs.
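The five stages just described can be sketched as a simple driver. Every function body below is a toy stand-in that I invented purely to show the ordering of the stages; none of this is the actual pterodactyls API, which wraps eleanor, penalized-spline detrending, TLS, EDI-Vetter/triceratops, and a final transit fit.

```python
# Toy sketch of the five pterodactyls stages; all helpers are
# invented placeholders, not the real package internals.

def extract_light_curve(star):            # 1. FFI light-curve extraction
    return star["flux"]

def detrend(flux):                        # 2. stands in for spline detrending
    mean = sum(flux) / len(flux)
    return [f / mean for f in flux]

def search(flux):                         # 3. stands in for the TLS search
    return [{"depth": 1.0 - min(flux)}]

def vet(candidate):                       # 4. stands in for the vetting checks
    return candidate["depth"] > 0.0

def fit(candidate):                       # 5. stands in for the transit fit
    return candidate

def run_pipeline(star):
    flat = detrend(extract_light_curve(star))
    return [fit(c) for c in search(flat) if vet(c)]
```

For example, `run_pipeline({"flux": [1.0, 0.99, 1.0, 1.01]})` returns a single shallow candidate; the only point being illustrated is the extract, detrend, search, vet, fit ordering.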
Now, in order to extract the light curves from the FFIs, we use eleanor; specifically, we use the PSF-modeled light curves, since they better enable characterization of small signals in crowded fields and minimize the effects of scattered light from the Earth and the Moon. As you can see here, the light curves of young stars are notoriously variable. Their variability makes them extremely tricky to detrend. After a lot of testing, we found that a penalized spline with knot optimization based on the stellar rotation rate worked best for our sample. Now, a penalized spline is essentially a piecewise polynomial fit where the number of pieces is determined by the rotation rate. Fast rotators are harder to detrend and hence will need more splines, or more pieces, than relatively slower rotators. As you can see, by using this kind of spline we were able to produce flat light curves that are optimized for transit searches. Once we have the detrended light curve, we make use of Transit Least Squares, or TLS, in order to search for periodic transit-like features. TLS is optimized for signal detection of small planets by taking into account the effects of limb darkening to better model the ingress and egress shape. We implement an SDE and an SNR cut of seven to further reduce any transit detections that are triggered by detrending artifacts. While the penalized spline generally does a good job flattening light curves, we occasionally still see features in the light curves that could be picked up by TLS as a candidate. In order to filter out any false positives that could be due to detrending artifacts, we use criteria inspired by the Kepler Robovetter and Caitlin's EDI-Vetter to create six checks that address the issues unique to our sample. While we weren't able to detect any new planets, pterodactyls was able to recover eight out of the 10 planets that were detected in these clusters.
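The knot-count idea, more pieces for faster rotators, can be illustrated with a toy detrender. Here each "piece" is just a local median level rather than a real penalized spline, and the pieces-per-rotation factor is my assumption, not the pipeline's actual tuning; the function names are also mine.

```python
import statistics

# Toy illustration of knot optimization by rotation rate: faster
# rotators get more pieces. A real penalized spline would fit smooth
# polynomials; a per-piece median is used here only to keep the
# knot-count logic visible.

def n_pieces(baseline_days, rotation_period_days, pieces_per_cycle=3):
    """More pieces for fast rotators: a few knots per rotation cycle."""
    return max(1, round(pieces_per_cycle * baseline_days / rotation_period_days))

def detrend(time, flux, rotation_period_days):
    n = n_pieces(time[-1] - time[0], rotation_period_days)
    size = max(1, len(flux) // n)
    flat = []
    for i in range(0, len(flux), size):
        chunk = flux[i:i + size]
        level = statistics.median(chunk)       # local trend level
        flat.extend(f / level for f in chunk)  # divide out the trend
    return flat
```

A slow rotator over the same baseline gets fewer, longer pieces, which is exactly the trade-off described above: enough flexibility to remove spot modulation without eating the transits.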
The two planets that we weren't able to detect were understandable: one of them was a single-transit event, which is hard to detect in a blind search, and the other was an extremely small planet, less than two Earth radii, which was below our detection limits. All of these previously detected planets passed all of our vetting tests. The top plots here show the progression of the light curves through pterodactyls. In blue is the light curve that we extracted using eleanor, that is, the PSF light curve. The red line represents the trend that is divided out during the detrending process in order to give us the flat light curves in green, in which we're able to search for planets whose transits are depicted using the black dashed lines. In addition, pterodactyls also has the capability to detect multi-planet systems, such as the one in Pisces-Eridani. We do this by masking the transit locations of the first planet in the pre-detrended light curve, and then reprocessing it through pterodactyls. This way we have been able to recover both of the multi-planet systems among these eight planets, out of the 10 planets, sorry. We also computed the detection efficiency of our pipeline using injection-recovery tests, in which we include the flux contamination of each star as calculated by triceratops. We find that the overall detection efficiency doesn't exceed 50% anywhere, and that all of the recovered planets, denoted by blue stars, are in bins with pretty low values. Just to put this plot into context, an Rp over R-star of about 0.1 is a Jupiter-sized planet around a solar-radius star.
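The transit-masking step used for the multi-planet search can be sketched with standard phase-folding arithmetic: points near any mid-transit of the first planet are removed before the light curve is searched again. The parameter names and the padding factor below are my own choices for illustration.

```python
# Mask the in-transit points of a detected planet so the light curve
# can be re-searched for additional planets. Points within
# pad * duration / 2 of any mid-transit time are replaced with None.

def mask_transits(time, flux, period, t0, duration, pad=1.5):
    half_window = pad * duration / 2.0
    masked = []
    for t, f in zip(time, flux):
        # signed phase distance from the nearest mid-transit epoch
        phase = (t - t0 + period / 2.0) % period - period / 2.0
        masked.append(None if abs(phase) <= half_window else f)
    return masked
```

Running the masked light curve back through the search is what let both known multi-planet systems among the eight recoveries be found.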
On the top we have the detection efficiency of THA, which is higher than our overall detection efficiency, which makes sense given that THA is a moving group where the stars are relatively well spread out and have less noisy light curves. As a result, you can see that we have pretty good detection efficiency. The other extreme is IC 2602, which has extremely low detection efficiency, with only one bin barely going over 20%. And this makes sense, because IC 2602 is a very crowded field and also has extremely variable stars with very noisy light curves, which propagates into the detection efficiency. - [Josh] About two minutes left. - Okay, thank you. So for our next steps, the plan is to build upon the detection efficiency and provide occurrence rates. Sorry, let me take a step back. The next step is to search for and vet planet candidates in nearby clusters and moving groups. So far we have about 20 more clusters that we plan on looking at, and if we find any planet candidates, we plan on handing them over to the community for follow-up. Given that the best way to understand what kind of planets we are detecting is to characterize the stars, we're also working on a uniform characterization of the stars in all of these young clusters in order to provide even better radius estimates. And finally, we plan on providing the community with an occurrence rate of short-period planets in these young stellar clusters, with a possible bifurcation between young star clusters and moving groups, given that young stellar clusters are more crowded, whereas moving groups tend to have a higher detection efficiency given that they're more spread out. With that, I will leave my summary slide up and take any questions. - Great, thank you very much, Rachel. This is really great stuff. Let's take a look at the questions. So, no questions on the shared screen, but let me ask about this process where you are analyzing the FFI data.
Have you experimented yet with the faster-cadence FFI data that's now available, having gone from 30 minutes to 10 minutes? - So, when I first started this project about two years ago, only the 30-minute cadence data was available. For our next step we plan on experimenting and checking how well the default version of the pipeline does with the 10-minute cadence data, and then, if it's not too much work, the switch from 30-minute to 10-minute cadence data is almost an obvious step, so we plan on doing that in the future. The only small worry that I would have is that, since we're going for occurrence rates, using two different cadences and not having a properly uniform sample in terms of cadence would be somewhat problematic, but this could easily be overcome if we just, for example, focus on the 10-minute results, which has already been done. - Okay, great. And I think there are other groups that have been examining and trying to extract light curves of crowded regions and clusters within TESS. Off the top of my head I remember projects like CDIPS and PATHOS. I was curious if you've compared your light curves of these heavily blended cluster stars to those of other groups, to see which methods seem to be most capable of extracting photometry in crowded fields. - Well, we haven't directly compared the light curves themselves yet; what we have done, in this bonus slide over here, is compare all of the planetary parameters that we get from their light curves versus ours. And as you can see, on average, for the periods we're doing a pretty great job of landing on the one-to-one line, whereas for the radii we do tend to underestimate a few of them, but overall, when you take errors into account, we're pretty close to the one-to-one lines.
So it seems like switching over to, or even testing, a whole different method of extraction would be a bit of overkill, just because of how close our results are to theirs. - That makes sense. So, actually, in the last couple of minutes a couple more questions have popped up on the Q&A, but I think we have to move on to the next talk. So maybe, if you want to, you can respond to some of those questions in the Slack channel. - Right. Thank you, Josh. - Wonderful. Thank you very much, Rachel.