Wednesday, September 8, 2010

Not just hopefully, it's truly working

I am 90% certain that all that's left now is computing my way through the clusters. Everything seems to be working very well. The parameters are great visual fits. Some masses I am getting almost too accurately (to within 0.05 solar masses), some not very accurately (only to within 1 solar mass), but that seems to be determined by the accuracy of the age determination, which in turn depends on the photometric error and the quality of the data.
Most of the time I know the mass to about ±0.2 solar masses.

Saturday, September 4, 2010

Q Correlation fixed

So I first thought it would be really tricky to factor in the correlation between B and V for a given q. Thanks to a nice shower, I have now realized it actually reduces the number of calculations I need to do each round. Before, I pulled the probability of the error in V (and in B, respectively) from a vector of values. I have merely changed that vector into a matrix: the element I choose from the matrix is determined by both the error in V and the error in B. Simple. Problem solved... hopefully.
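Roughly the idea, as a Python sketch (my actual code is in IDL; the grid spacing, error sizes, and correlation value below are placeholders, not the real numbers):

```python
import numpy as np

# Sketch: replace the 1-D lookup P(delta_V) with a single 2-D lookup
# P(delta_V, delta_B) so the two residuals are evaluated jointly.
sigma_v, sigma_b, rho = 0.03, 0.03, 0.7        # placeholder errors and correlation

dv = np.linspace(-0.3, 0.3, 61)                # grid of V residuals (mag)
db = np.linspace(-0.3, 0.3, 61)                # grid of B residuals (mag)
DV, DB = np.meshgrid(dv, db, indexing="ij")

# Bivariate normal density on the grid; this matrix is computed once per round.
norm = 1.0 / (2 * np.pi * sigma_v * sigma_b * np.sqrt(1 - rho**2))
z = (DV / sigma_v)**2 - 2 * rho * (DV / sigma_v) * (DB / sigma_b) + (DB / sigma_b)**2
prob_matrix = norm * np.exp(-z / (2 * (1 - rho**2)))

def lookup(delta_v, delta_b):
    """Pick the matrix element determined by both the V and the B residual."""
    i = np.argmin(np.abs(dv - delta_v))
    j = np.argmin(np.abs(db - delta_b))
    return prob_matrix[i, j]
```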

Friday, September 3, 2010

Bit more confusing than I hoped

Part of the reason I have not accomplished a lot this week is the fickle nature of my program. When I first got the gridding working, it looked perfect. Then I went to sleep and woke up the next day to start applying it.

For clusters with larger photometric errors, younger ages were always more favorable, following an exponential-like distribution. The way I factor in the binary distribution allows the program to think a red giant is a really noisy main sequence star. The blue error and visual error are uncorrelated in my program, but for an actual binary star the brightness would increase in both filters, not just one. A binary companion would not boost the blue magnitude while emitting no visual light.
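To make that concrete, here is a tiny Python example (the magnitudes are made up) showing that an unresolved companion brightens the system in both B and V, because fluxes add in every band:

```python
import numpy as np

def combine_mags(m1, m2):
    """Apparent magnitude of two unresolved stars: add fluxes, convert back."""
    return -2.5 * np.log10(10**(-0.4 * m1) + 10**(-0.4 * m2))

# Illustrative numbers only: a primary plus a fainter companion.
B1, V1 = 13.2, 12.5   # primary
B2, V2 = 15.0, 14.0   # companion

print(combine_mags(B1, B2) - B1)   # negative: the pair is brighter in B
print(combine_mags(V1, V2) - V1)   # negative: and brighter in V as well
```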

Thinking about it now, I should actually take that correlation into account somehow when computing probabilities. Might be tricky though. I would have to calculate the most likely q for each star and then compute that star's probability. This would slow things down a lot, but I would get some useful binary statistics out of it.
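Something like this is what I have in mind, sketched in Python (my code is in IDL, and `q_to_dmag`, which would give the companion's brightening in each filter as a function of q, is a stand-in for a lookup off the isochrone):

```python
import numpy as np

def best_q(b_obs, v_obs, b_model, v_model, sigma_b, sigma_v, q_grid, q_to_dmag):
    """
    Scan a grid of mass ratios q, shift the single-star model magnitudes by
    the brightening a companion of that q would add in each filter, and keep
    the q that maximizes the joint Gaussian likelihood of the B and V residuals.
    """
    best, best_loglike = q_grid[0], -np.inf
    for q in q_grid:
        b_pred = b_model - q_to_dmag(q, "B")   # companion makes the star brighter
        v_pred = v_model - q_to_dmag(q, "V")
        loglike = -0.5 * (((b_obs - b_pred) / sigma_b)**2 +
                          ((v_obs - v_pred) / sigma_v)**2)
        if loglike > best_loglike:
            best, best_loglike = q, loglike
    return best, best_loglike
```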

Given these problems, I've decided to run both methods on each cluster and use the one that seems more reasonable.

To keep the Cross Entropy / bootstrapping method under control, I am going to take prior information more into account. If part of the distribution is not a remotely conceivable visual fit, I will eliminate it. Also, a point sometimes falls outside the main grouping of the distribution and looks like background noise, so to eliminate that I am discrediting all ages that get only one hit.
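The single-hit filter is simple enough; a Python sketch with made-up log ages (my actual code is in IDL):

```python
import numpy as np

# Drop any fitted age that shows up only once among the bootstrap/CE
# solutions, treating isolated hits as background noise.
ages = np.array([8.8, 8.85, 8.85, 8.9, 9.6, 8.85, 8.8])   # placeholder log ages
values, counts = np.unique(ages, return_counts=True)
keep = values[counts > 1]                                  # discard single hits
cleaned = ages[np.isin(ages, keep)]
print(cleaned)    # the lone 9.6 "noise" solution is gone
```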

I am also going to start taking prior information into account when calculating the mass probability distribution. I know the star is a red giant and not a really noisy main sequence star.

The tech people finally figured out why I didn't have access to gio (a separate account), and since gio is many times faster, I can progress much more rapidly now. Hopefully. Always only hopefully.

I feel like this project is chasing me in circles. But I guess I won't know which way is best until I try many possible ways. There is no simple solution I can reach on the first try, and there was no reason to expect this to be easily accomplished. These are things I had heard before in the abstract but never truly understood as well as I do now.

PS- If I don't get this all done in a week I'll finish it up at home.

Tuesday, August 31, 2010

New gridding adventures

To compare against the performance of the Cross Entropy method, I modified it slightly to brute-force compute on a 30x30x30 parameter grid at solar metallicity. Surprisingly enough, it isn't that much slower; running times are actually roughly comparable, and the gridding method isn't subject to random noise. I could use the CE method to get an age estimate when there isn't any previous estimate from which to choose my grid, then switch to gridding to get the PDF.
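The structure of the brute-force version is roughly this, as a Python sketch (my code is in IDL; I'm assuming the three grid axes are log age, distance modulus, and E(B-V), with metallicity pinned at solar, and `neg_log_like` stands in for the real isochrone/cluster comparison):

```python
import numpy as np

ages  = np.linspace(8.0, 9.8, 30)    # log(age), placeholder range
dmods = np.linspace(9.0, 13.0, 30)   # distance modulus (assumed axis)
ebvs  = np.linspace(0.0, 0.6, 30)    # E(B-V) reddening (assumed axis)

def neg_log_like(age, dmod, ebv):
    return 0.0   # placeholder for the actual -log likelihood of the cluster

# Fill the 30x30x30 cube of -log likelihoods by brute force.
nll = np.empty((30, 30, 30))
for i, a in enumerate(ages):
    for j, d in enumerate(dmods):
        for k, e in enumerate(ebvs):
            nll[i, j, k] = neg_log_like(a, d, e)

# The best-fit parameters are just the grid point with the smallest -log L.
i, j, k = np.unravel_index(np.argmin(nll), nll.shape)
print(ages[i], dmods[j], ebvs[k])
```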

Well, I was getting delta functions again and way too large -log likelihoods. But while writing this up I realized that I was marginalizing over the other parameters before exponentiating the -log likelihood. So now it's turning out quite nicely.
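In other words (a Python sketch with a random placeholder grid), the nuisance parameters have to be summed over in probability space, not in -log likelihood space:

```python
import numpy as np

# nll stands in for the 30x30x30 grid of -log likelihoods from the gridding step.
nll = np.random.rand(30, 30, 30) * 5.0

# Wrong (what I was doing): summing -log L over the nuisance axes and *then*
# exponentiating multiplies the probabilities together, which is why the
# marginal PDFs collapsed into delta functions.
wrong_age_pdf = np.exp(-nll.sum(axis=(1, 2)))

# Right: exponentiate first (shifting by the minimum for numerical stability),
# then sum the probabilities over the other two parameters.
like = np.exp(-(nll - nll.min()))
age_pdf = like.sum(axis=(1, 2))
age_pdf /= age_pdf.sum()
```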

I guess previous problems put me off the gridding method, but now I see that it is probably the way to go. The one problem is that I can't use U-B data without doing a for-loop to fit for E(U-B); IDL couldn't handle the size of the array needed to account for four variables.

So tomorrow, I will start running everything through this and test the effect of binary percentage. I think I'll start working on my final paper and presentation too while waiting for things to run.

Monday, August 30, 2010

Stellar Oscillations in Planet-hosting Giant Stars

Artie P. Hatzes and Mathias Zechmeister
2008 Proceedings of the Second HELAS International Conference

Hatzes and Zechmeister performed very thorough observations of HD13189, Beta Gem, and Iota Dra, stars with confirmed exoplanets, to better characterize the jitter and oscillation modes of the stars. They compared their results with formulas from Kjeldsen & Bedding; the results were consistent for Beta Gem and Iota Dra, but not for HD13189. The discrepancy for HD13189 is thought to arise because Kjeldsen & Bedding's formulas are primarily predictive for higher-order modes, whereas HD13189 is pulsating in approximately the second mode. They also mention the possibility of constraining stellar masses based on the properties of oscillation modes.

HD13189 also has a very large scatter of 50 m/s, and its mass is imprecise, spanning the range of 1-7 solar masses. Its predicted jitter amplitude, though, is 500 m/s, so the observed scatter is actually smaller than predicted.

Other interesting facts from the article: it took 26 years of observations to confirm the planet around Beta Gem, and the mass estimate started out at 2.8 but was later revised to 1.7.

Wednesday, August 25, 2010

Stellar Mass Distributions


So now I have working code to compute the PDF and CDF for a star's mass in an open cluster. I have uploaded the results for NGC2099 star no. 108. It has an expected mass of 1.97 solar masses and a 90% confidence interval of [1.7, 2.26], a range of 0.56 solar masses. The mean mass is 0.3 lower than what I roughly had before, but WEBDA lists a log age of 8.85 for the cluster, instead of the 9 or so that I got.
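For reference, those summary numbers come straight off the PDF/CDF; a Python sketch using a stand-in Gaussian PDF rather than the real distribution for star no. 108:

```python
import numpy as np

# Placeholder mass grid and PDF, just to show how the summary values are read off.
mass = np.linspace(1.0, 3.0, 61)                   # solar masses
pdf  = np.exp(-0.5 * ((mass - 1.97) / 0.17)**2)    # stand-in PDF
pdf /= pdf.sum()

expected = np.sum(mass * pdf)                      # expected (mean) mass
cdf = np.cumsum(pdf)
lo = mass[np.searchsorted(cdf, 0.05)]              # 5th percentile
hi = mass[np.searchsorted(cdf, 0.95)]              # 95th percentile
print(expected, lo, hi)                            # mean and 90% interval
```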

It turns out the mass grid for each parameter set is different, so I had to round onto a standard mass grid with 1/30 solar mass intervals in order to marginalize over all the parameters. When multiple points on an old mass grid rounded to the same new point, I took the average of their probabilities.
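The re-binning itself is just rounding and averaging; a Python sketch with made-up numbers (my actual code is in IDL):

```python
import numpy as np

# Snap one parameter set's own mass grid onto a common grid with 1/30
# solar-mass spacing, averaging probabilities whenever several old points
# round to the same new point.
step = 1.0 / 30.0
old_mass = np.array([1.60, 1.61, 1.70, 1.74, 1.92])   # placeholder grid
old_prob = np.array([0.10, 0.20, 0.30, 0.25, 0.15])

new_idx  = np.round(old_mass / step).astype(int)      # nearest common bin
bins     = np.unique(new_idx)
new_mass = bins * step
new_prob = np.array([old_prob[new_idx == b].mean() for b in bins])
print(new_mass, new_prob)   # the first two old points share a bin, so 0.10 and 0.20 average to 0.15
```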

I am actually contemplating retrying the gridding method using my new additions for the binary population, IMF, etc. With a coarse enough grid, it might be less than an order of magnitude slower, and it wouldn't be subject to the random perturbations of CE or the underestimated errors of bootstrapping.

For the next two and a half weeks, I only need to run the procedure through all the clusters and stars and rework a couple of things in the survey optimization. I also want to determine the most efficient amount of time to spend on this survey, i.e., the point at which adding more objects would degrade the quality of the objects too much. I wonder what the overall average rate of planet discovery per unit of telescope time is.

Tuesday, August 24, 2010

Errors and the Bootstrap Method

So, I had to fiddle a little with the normal code again because it started acting up (changing ES parameters, I think), and I checked to make sure it worked on some other clusters.

Then I wrote a procedure to apply the bootstrap method to the Cross Entropy fit: I fitted for the parameters 50 times, each time after scrambling the residuals. Shown is the resulting histogram for the age. Interesting how some runs come out just completely wrong.
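The bootstrap procedure is essentially this, as a Python sketch (`fit_cluster` stands in for my actual CE fitting routine, which is in IDL):

```python
import numpy as np

def bootstrap_ages(residuals, model_mags, fit_cluster, n_boot=50, seed=1):
    """
    Residual bootstrap: scramble the fit residuals, add them back onto the
    best-fit model magnitudes to build a synthetic cluster, and re-run the
    Cross Entropy fit each time. fit_cluster() is assumed to return a log(age).
    """
    rng = np.random.default_rng(seed)
    ages = []
    for _ in range(n_boot):
        shuffled = rng.permutation(residuals)    # scramble the residuals
        fake_mags = model_mags + shuffled        # synthetic photometry
        ages.append(fit_cluster(fake_mags))      # refit the parameters
    return np.array(ages)                        # histogram this for the errors
```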

All this didn't actually turn out to be as efficient time-wise as I would have hoped, but it works.