Friday, July 30, 2010

Survey Update
Stars: 64
Clusters: 20

Thursday, July 29, 2010

Two Sato et al papers

Sato et al. are a group of scientists in Japan looking at evolved intermediate-mass stars, and across two papers they discuss six new discoveries. They also include some interesting speculation about the planet desert inside 0.6 AU and the apparent lack of a metallicity correlation for intermediate-mass stars. Giants that have evolved past the red giant branch tip could actually have engulfed their inner planets, but that does not explain the lack of inner planets around subgiant stars. A different formation mechanism may be at work.

Importantly for my research, they include jitter values. As I previously assumed, the jitter is around 15 m/s: 10 m/s was the norm for the subgiants they used, but the full red giants had more jitter. They had one star at 25 m/s, but it showed chromospheric activity in the Ca II H & K line analysis.

Wednesday, July 28, 2010

BEST RESULTS!!!

Pretty sure this is correct now.....

4 observations each of 100 stars gives 18.82 planets

Seems reasonable, and it should be returning right about as many planets as I assigned :)

Better Results

So when I left last night, I set the Monte Carlo simulated annealing program to run for 5000 iterations instead of 1000. Now the optimum is even better.....
6 observations of 96 stars gives 10.82 planets.

I did get "arithmetic error: floating underflow" and "arithmetic error: Floating illegal operand", but the results seem legitimate.

Except, yeah, none of this actually works in practice. I didn't set a lower limit on signal to noise, so it's at a completely impractical low level. Um... I'll write in a lower limit and run it again. That's probably why it's actually returning 10% planets instead of 20%.

Except, yeah, I just spent an hour or so trying to figure out what was wrong with my instrument noise formula: I had 0.01 as a constant instead of 0.002. Who knows how many times I recalculated it and kept getting it wrong. It works now :) I'll run another 5000-iteration simulation and see what happens :)

Tuesday, July 27, 2010

RESULTS!!!!!!!!!!!!!! :) :) :) :) :) :) :) :) :) :) :) :) :)

FINALLY!!!!!!!!!!!
THE MONTE CARLO SIMULATION RETURNED LEGITIMATE RESULTS!!!! MORE CAPSLOCK!!!!!!


It ended on.......
drumroll......

26 observations of 68 stars yielding an average of 6.2 planets.

Actually, a couple of iterations earlier, 24 observations of 68 stars yielded 7.02 planets, which is the best of the 985 accepted new points. I think I need to tighten the acceptance probability at the end and also run more iterations, since it still jumped around a bit near the end. Also, it accepted new points 986 times out of the thousand iterations? Definitely tightening the acceptance criterion / running more iterations.

I'll run it for 5000 iterations right now, and check back after dinner!!!!!!

A Jupiter-Mass Companion to a Solar-Type Star

Mayor and Queloz 1995

Annoyed as I was at the debugging process, I decided to take a break while my Monte Carlo code ran and read Mayor and Queloz's historic Nature paper about 51 Pegasi b.

The paper is actually quite exciting. These five little pages were the start of this entire field. The "intensive observations during eight consecutive nights" sounds very exciting even through all the technical language in the paper. I can only imagine what it must have felt like to be there for that discovery.

The tone of the paper is different from other planet papers I've read. They clearly expected a lot of opposition, which I guess they got. Each possible objection seems to be carefully thought out and analyzed. Nowadays I assume there is a standard process for eliminating all the possible sources of error, but they got to come up with it all themselves. Quite fun.

In the details, I liked learning that giant pulsating stars pulsate simultaneously in many short-period modes. I guess that does make sense considering what I'm working through in Carroll and Ostlie on stellar pulsation.

Monday, July 26, 2010

Some Results! Statistical distinctiveness


Disclaimer: these results assume 30 random observations over 1000 days and a constant stellar mass of 2.5 solar masses.

A planeticity of 0.18 (the prediction for 2.5 solar mass, solar-metallicity stars) is inconsistent with zero above 20 sampled stars. :) The prediction is 3.58 detected planets with a standard deviation of 1.63. Great!

But only at 100 stars does 0.18 planeticity start being inconsistent with 0.08 planeticity. I think it will only be by combining this survey with other work that we will be able to say for sure that intermediate-mass stars have a larger planeticity :( As shown in the graph, a survey drawn from the 18% distribution could easily fall in the 8% distribution's area.
On the bright side, my average mass might end up being greater than 2.5 solar masses, so the planeticity will be higher and it will be easier to tell the two apart. Also, low-mass stars host a different population of planets, much of which would probably be unobservable around intermediate-mass red giants. 100 good targets is probably an upper limit on how many I can find (maybe), and I don't know how observing time will limit it.

I did approximate the survey-to-survey distribution of planet counts as Gaussian, which might be wrong; it kind of looks triangular. I don't know how to save IDL graphs, so I can't really show an example in the blog. Anyway, I used the criterion:
| nplanets(0.18) - nplanets(0.08) | > 2 * sqrt( sigma(0.18)^2 + sigma(0.08)^2 )
to determine distinctiveness.
Both the average and the standard deviations were pulled from 100 trial surveys.
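
To make that concrete, here is a minimal IDL sketch of the test. run_survey is a hypothetical stand-in for my survey simulator (not its real name), assumed to return the number of detected planets for one trial survey:

    ; Two-sigma distinctiveness test between the 18% and 8% planeticity cases,
    ; using the mean and scatter over 100 trial surveys of each.
    ntrials = 100
    n18 = fltarr(ntrials)
    n08 = fltarr(ntrials)
    for i = 0, ntrials-1 do begin
      n18[i] = run_survey(planeticity=0.18, nstars=100, nobs=30)   ; hypothetical call
      n08[i] = run_survey(planeticity=0.08, nstars=100, nobs=30)
    endfor
    diff  = abs(mean(n18) - mean(n08))
    sigma = sqrt(stddev(n18)^2 + stddev(n08)^2)
    print, 'Distinct at 2 sigma? ', diff gt 2.*sigma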

So the planet-assigning program has been altered to actually pull from the mass law (see "Making my Own Distribution Functions") and from a uniform distribution in semi-major axis over 0.6 to 3 AU. I struggled with the algebraic manipulation and units for a while, but now it works quite well. Also, recoverability is now determined by requiring RMS > 3*noise instead of seeing whether rv_fit_mp returns the same parameters. This is much faster and more efficient: 4 seconds for 100 surveys, so I can run 100 surveys per step and still get many more steps in. Also, since we probably won't have much control over the observing schedule beyond how many observations we take, observing dates are now completely randomized.
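
For reference, the detection criterion itself is now just a one-line comparison. A self-contained toy version in IDL (the orbit, dates, and noise values below are made-up placeholders, not drawn from my actual star list):

    ; Toy check of the new recoverability criterion: a planet counts as
    ; detected when the RMS of its RV signal exceeds 3x the noise.
    nobs  = 6
    t     = randomu(seed, nobs)*1000.        ; observation dates in days (random)
    k     = 40.                              ; RV semi-amplitude in m/s (placeholder)
    per   = 500.                             ; period in days (placeholder)
    rv    = k*sin(2.*!pi*t/per)              ; circular-orbit RV signal
    noise = 15.                              ; combined jitter + instrument noise, m/s
    rms      = sqrt(mean((rv - mean(rv))^2))
    detected = rms gt 3.*noise
    print, 'RMS = ', rms, '  detected = ', detected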

Goals for tomorrow:
1. Import the array of magnitudes and masses for the currently available stars
2. Calculate noise
3. Make sure the MC parameters and time scales are good
4. Start running the MC program AND SEE WHAT HAPPENS!!!!

Friday, July 23, 2010

Turtle wins the race? eventually...

I tried reading some books in the library on the Monte Carlo method, trying to find descriptions of simulated annealing. While I didn't understand a lot of it, I did get this much: the probability of accepting a worse point is e^(-diff/temperature), where diff is how much worse the new point is. That helped.

I have also added in a permutation of the time each round. The code also prints the results each round.

I have also added in the "temperature" that controls the acceptance of worse results. Some simulated annealing tutorial said Tnew = alpha * Told was a common annealing schedule, so I decided to adopt that; I would like to research that choice a bit more, though. I set alpha to 0.998 so that the walk can still move uphill halfway through a 1000-iteration run, but probably won't accept anything worse near the end.
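
Putting the acceptance rule and the cooling schedule together, the core of the loop looks roughly like this. This is only a sketch: perturb and survey_efficiency are placeholder names for my neighbor function and survey simulator, and t0, niter, and current are assumed to be set beforehand.

    ; Simulated annealing core: Metropolis-style acceptance of worse points
    ; with probability exp(-diff/temp), and geometric cooling Tnew = alpha*Told.
    alpha    = 0.998d
    temp     = t0                          ; starting "temperature"
    eff_cur  = survey_efficiency(current)  ; hypothetical efficiency function
    best     = current
    eff_best = eff_cur
    for i = 0L, niter-1 do begin
      trial   = perturb(current)           ; hypothetical neighbor function
      eff_new = survey_efficiency(trial)
      diff    = eff_cur - eff_new          ; positive if the trial point is worse
      if (diff le 0) or (randomu(seed) lt exp(-diff/temp)) then begin
        current = trial
        eff_cur = eff_new
      endif
      if eff_cur gt eff_best then begin    ; keep track of the best point seen
        best     = current
        eff_best = eff_cur
      endif
      temp = alpha*temp                    ; cool down
    endfor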

I decided on a 1000-iteration run based on time. One survey takes about 15 seconds. If I average five surveys per point, that is already about 75 seconds per iteration, so 1000 iterations would take roughly 21 hours, not counting the time spent calculating each new point. I'd like to get it more efficient than that, but I don't know how.

I still need to calculate noise. I also need to somehow restrict the total observing time to under 10 hours per semester. I think I'll also need to keep the time vector from ending up with two identical dates.
At least my to-do list is slowly shrinking. I am still trying to read some difficult papers and books on this subject and on experiment optimization.

One paper I'm reading talks about optimizing future observations after 3 or more velocities have already been taken. If I could set up that framework now, I'd like to implement it in the latter part of the survey if at all possible.


Survey Update
One 9th magnitude, 2.82 solar mass star in NGC2527 could be added, but SIMBAD doesn't call it a "star in a cluster" the way it does most other stars in clusters.

PROGRESS! Well, sort of....

Even though I still don't understand much of the papers and will continue to wade through them to understand optimum experimental design (involving completely new ideas like Shannon information and utility functions), I did manage to make some progress on my Monte Carlo code yesterday. I read in Ford 2008 that he just used a Jeffreys prior, P(x) = 1/x, since this represents maximum ignorance about the scaling of a parameter. So I adopted a Jeffreys prior for period and velocity semi-amplitude. I added a uniform distribution for eccentricity, though I want to figure out something better than that. I do not know if a Jeffreys prior would be right for eccentricity, since eccentricity is not a scale parameter.
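
The practical trick for drawing from that prior (which I am fairly sure is right) is that a 1/x prior between two bounds is the same as being uniform in log x, so you can draw a uniform number and exponentiate. A minimal IDL sketch; the period and K bounds here are placeholders, not my adopted values:

    ; Draw n sets of planet parameters: Jeffreys (1/x) priors for period and K,
    ; uniform prior for eccentricity.
    n    = 1000
    pmin = 10d0  & pmax = 1000d0    ; period bounds in days (placeholders)
    kmin = 5d0   & kmax = 500d0     ; semi-amplitude bounds in m/s (placeholders)
    period = pmin * (pmax/pmin)^randomu(seed, n)   ; uniform in log P
    k      = kmin * (kmax/kmin)^randomu(seed, n)   ; uniform in log K
    ecc    = randomu(seed, n)                      ; uniform eccentricity on [0,1)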

I have finished (other than the concerns expressed above) writing a "Survey" program and everything inside it, and am working on the actual body of the Monte Carlo code. That is the progress part of yesterday. I have multiple problems that I am still trying to figure out. First, the neighbor function, i.e. how to get each new point.

Time presents the worst problems. The way I have it right now, the time vector gets changed along with the number of observations: if the number of observations goes down, dates get randomly eliminated; if it goes up, dates get randomly added. I don't like coupling the optimization of those two parameters. Also, it doesn't account for not being able to observe certain objects for parts of the year. And it's the same for all stars. And I have no safeguards to prevent two identical times from getting into the time vector. And I have no constraints requiring time in each semester.

The second problem is noise. First, it's dependent on what I do for time above. Second, when I add more stars, I'll have to include dimmer stars, and the average exposure time goes up for the same signal-to-noise ratio. Do I need to include that, and how do I do so efficiently? Another problem: I do not know how to translate from signal-to-noise ratio to instrumental uncertainty in m/s.

In the overall category, do I change just one parameter at a time, or all of them every iteration?

Even though I know the overall theory of the method, I don't actually know how to accomplish specific parts. For example, how should I change the acceptance probability parameter with the number of iterations? Does the acceptance probability depend linearly on the difference in planets between two points?

Also, do I need to be concerned about efficiency? Running one survey of 100 stars takes about a minute. If I run ten thousand steps in the MC method, each of which involves not only running the survey but also calculating the parameters to feed into it, that is a very long time. I don't understand why a single survey takes an entire minute. Probably a stupid FOR loop that I can't eliminate.
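
For what it's worth, the standard IDL fix is to replace per-element FOR loops with whole-array operations wherever possible. A toy illustration of the kind of change I mean (the jitter and instrumental error values are arbitrary placeholders; this is not my actual survey code):

    ; Toy example of vectorizing an IDL loop: per-star noise for every star at once.
    nstars = 100
    jitter = 15. + fltarr(nstars)            ; m/s, placeholder jitter for each star
    instr  = 2. + 3.*randomu(seed, nstars)   ; m/s, placeholder instrumental error
    ; Slow version: explicit loop over stars.
    noise = fltarr(nstars)
    for i = 0, nstars-1 do noise[i] = sqrt(jitter[i]^2 + instr[i]^2)
    ; Fast version: a single array expression, no loop.
    noise = sqrt(jitter^2 + instr^2)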

Though I am definitely challenged on this problem, I feel like I am learning a lot and gaining an extremely valuable skill. Back to paper reading :)

Survey Update
Two ninth magnitude 3 solar mass stars in NGC2458 have been added. Five others in the same cluster were eliminated for one reason or another.

Wednesday, July 21, 2010

Nevermind the meaningless blather

So, after trying so hard to figure out the distribution functions, I realized this:

My program could accomplish one of two goals:
1. Tell us what we are going to see
2. Tell us how efficient we will be at detecting what is actually there

If I wanted to predict what the survey will return, I would need the distribution functions I have been trying to find, but if I change the goal to merely assessing the efficiency of the survey, I can just use a uniform distribution function. That way I can tell how the distribution function changes after passing through the noise and detection. Later I may run the simulations to predict what we will actually see, but for now I will continue writing my code the simpler way.

Later: Meh, that itself was just substance-less blather... If I am just going for efficiency, then the code will probably just tell me to look at a few stars really well. Number of planets is the one metric that strikes a trade-off between efficiency and actually managing to detect a wide range of planets. How would I factor sample size into Goal 2?

Further complicating the Goal 1 approach, the distributions that I got yesterday are post-bias. I need a pre-bias distribution so I can then apply the observational biases and see what happens. But I don't know the parameters of each individual survey, so I can't construct a "bias surface" to un-bias the observed planets.
I don't even know if my weird concept of a "bias surface" is a real thing. It would give the fraction observed out of the fraction actually present, so dividing the post-bias distribution function by it would give the pre-bias distribution. It probably requires many more data points than are available.

OH OH!!! Just found paper on survey optimization. Hopefully that will solve my problems.

Survey Update
M44: 1 new red subgiant at 6.68 mag and 2.2 solar masses (4 others eliminated)
NGC6404: four 9th magnitude possibilities

Tuesday, July 20, 2010

Making my Own Distribution Functions






The closest thing I found to a description of the distribution functions for mass and period around intermediate-mass stars was Bowler et al., which only said what the distribution function was not. Therefore, I decided to take matters into my own hands and examine the periods and masses of all known planets around stars with masses greater than 1.5 solar masses.

For the 39 data points, the first thing I saw was that the mass distribution is mostly a power law. I ignored two lower-mass planets at 0.6 and 1.2 Jupiter masses, since the distribution piles up around 1.6 Jupiter masses and decays from there. With 1.59 Jupiter masses as xmin, the power law fits quite nicely (see figure). I would like to study this more and actually find the optimum xmin mathematically, rather than by the guessing that I did.
Mass Distribution Function:
P(x)=1.3917*x^(-1.9118)



Now the semi-major axis (which translates to period) is more complicated, and my fit is a sketchier approximation. I threw out the directly imaged planets (Beta Pic, Fomalhaut, HR8799) since those sit at massively larger semi-major axes and no doubt come from different distributions. The planet desert inside 0.6 AU was definitely obvious, and planets also seemed to pile up right outside the 0.6 AU boundary. After taking out the 0.6 AU pile-up, the rest of the data points looked Gaussian to me, so I fit them with a Gaussian.
Non-Pile-Up Semi-Major Axis Parameters:
Mean: 1.6772 AU
Standard deviation: 0.5149 AU
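
To actually draw planets from these two fits in the simulation, the mass can come from the standard inverse-CDF transform for a power law and the semi-major axis from the Gaussian, with anything inside the 0.6 AU desert redrawn. A rough IDL sketch (the redraw is my own crude way of imposing the desert, and it ignores the pile-up just outside 0.6 AU):

    ; Masses from the fitted power law P(x) ~ x^(-1.9118) above xmin = 1.59 Mjup,
    ; via the inverse-CDF transform; semi-major axes from the fitted Gaussian.
    n     = 1000
    alpha = 1.9118                                      ; power-law index
    xmin  = 1.59                                        ; Mjup
    mass  = xmin * (1. - randomu(seed, n))^(-1./(alpha - 1.))
    a   = 1.6772 + 0.5149*randomn(seed, n)              ; AU
    low = where(a lt 0.6, nlow)
    while nlow gt 0 do begin                            ; redraw anything inside the desert
      a[low] = 1.6772 + 0.5149*randomn(seed, nlow)
      low = where(a lt 0.6, nlow)
    endwhile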

Retired A Stars And Their Companions. III. Comparing The Mass-Period Distributions Of Planets Around A-Type Stars And Sun-Like Stars

Bowler et al.
The Astrophysical Journal. January 20, 2010

After the introduction and background about many interesting things I need to read more about later, Bowler et al. give updated information on the five stars already shown to possess planets in their 159-star survey. They applied a Bayesian model comparison test to decide whether or not to add a constant acceleration due to a binary companion (BIC = -2 ln(Lmax) + k ln(N), with the lower BIC preferred). Parameter uncertainties were estimated using a Markov chain Monte Carlo algorithm. Even though I understand the process as they describe it, I am unsure how they get the actual uncertainties out of the chain of steps. Lomb-Scargle periodograms were used to search for additional short-period companions. They also monitored all five stars photometrically over an extended period to tell whether the periodicity was a result of stellar activity, and to check whether the planets transit.
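
Just to make sure I have the criterion straight, the model comparison they describe boils down to something like this (the log-likelihoods and parameter counts below are made-up placeholders, not their numbers):

    ; BIC comparison: BIC = -2 ln(Lmax) + k ln(N); the lower BIC is preferred.
    nobs  = 40                        ; number of RV points (placeholder)
    lnl_a = -95.0d & k_a = 5          ; best log-likelihood and free parameters, no trend
    lnl_b = -90.0d & k_b = 6          ; same model plus a constant acceleration term
    bic_a = -2.*lnl_a + k_a*alog(nobs)
    bic_b = -2.*lnl_b + k_b*alog(nobs)
    print, 'Add the acceleration term? ', bic_b lt bic_a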

One of the most interesting parts of the paper for me was the discussion of detection limits. They synthesized fake orbits and determined the limiting K that would be detectable with the given noise. In addition to testing the stars without detected planets, they tested those with planets for additional companions. Even though they said they could detect sub-Saturn-mass planets at small separations, most of the noise comes from the star itself, so detection cannot improve much more. The only chance of finding Neptune-sized planets would be through transits or microlensing. Even with the noise, the preponderance of higher-mass planets is probably a real trend.

For this small sample of stars, they compared the statistics to those for Sun-like stars. Approximately 24% of the stars have planets, consistent with other predictions of more planets around higher-mass stars. They also tested whether the planets could be drawn from the same mass-period distribution as planets around Sun-like stars. When including planet occurrence rates, there was less than a 10^-5 probability that they were the same. Even when allowing different occurrence rates, they ruled out the planets being drawn from the same mass-period distribution with confidence over 4 sigma. Even though they had too few data points to actually determine the distribution parameters, they did rule out several regions of parameter space. There is also a trend toward low eccentricities for planets around intermediate-mass subgiants that differs from normal stars.

Monday, July 19, 2010

Extrasolar planets around intermediate mass stars

A P Hatzes
Phys. Scr. 2008

Hatzes begins by reasserting the possibility of a planet around Aldebaran (Alpha Tauri). It was one of the first prospective planet hosts, along with Pollux and Arcturus (Beta Gem and Alpha Boo). Pollux was later confirmed to have a planet, but neither of the others has been confirmed. If Aldebaran has a planet, it would have a 645-day orbit and M sin i = 9.5 Jupiter masses.

To get around the problem of not knowing red giants' masses reliably, they calculated Aldebaran's mass using interferometric measurements of the radius and spectroscopic measurements of the surface gravity. Another method that can be used to determine stellar mass is those usually nasty stellar oscillations. Stellar oscillations are still extremely nasty, though: Hatzes quoted a fundamental radial mode oscillation with an amplitude of 150 m/s and said that giant stars can have intrinsic variability of up to 100 m/s.
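
The radius-plus-gravity trick is just Newton's law: g = G M / R^2, so M = g R^2 / G, or in solar units M/Msun = (g/g_sun)(R/Rsun)^2 with log g_sun = 4.44 (cgs). A quick IDL one-liner with made-up numbers (not Aldebaran's actual values):

    ; Stellar mass from a spectroscopic log g and an interferometric radius.
    logg = 1.6       ; placeholder surface gravity, log10(cm/s^2)
    r    = 40.0      ; placeholder radius in solar radii
    mass = 10.^(logg - 4.44) * r^2
    print, mass, ' solar masses'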

In 2008, when Hatzes wrote this paper, there had been 20 extrasolar planets found around giant or subgiant stars, enough that he analyzed the trends. They exist in a limited range of periods from 160-1000 days, with the 1000-day limit probably reflecting the length of contemporary surveys. Inner planets are probably eliminated either by the expansion of the star's radius or by new instabilities. 60% of the planets had masses over 5 Mjup, in comparison to less than 20% for normal stars: a more massive star means a more massive planet. He also found that the distribution of orbital eccentricities is independent of stellar mass. With the few data points he had, he stated that a flat planeticity-metallicity relation was consistent with the data and that many of the planet-hosting giant stars are actually metal poor.

Still unable to determine what parameters mean

So, in attempting to write code that randomly draws the parameters, I have to know what each parameter is. I am still unable to work out what gamma and dvdt mean in the output of rv_fit_mp, and I am also clueless as to what cmvel and om are in rv_drive. I'm guessing om means omega, though.

Still searching through literature to get good distribution functions to use.
Once I get these problems finished though, it should all come together smoothly.

Planning to Plan for the Survey: Optimization

In order to run the survey most efficiently, I need to run Monte Carlo simulations of thousands of potential observing runs to find the best distribution of time, stars, masses, and signal to noise. Compounded with this, if I run the survey most efficiently, what is the probability I will be able to conclude anything from the results?

The Monte Carlo method works as a modified random walk. From an initial guess of the parameters, a step is randomly generated according to a Gaussian (or other) distribution function. The step is automatically accepted if the survey efficiency (or other metric) is higher. If the survey efficiency is lower, the point is accepted with a probability that depends on the difference in efficiencies and on some "temperature". The "temperature" decreases throughout the simulation so that the walk settles into the nearest extremum by the end.

So inside this method, I need to calculate some survey efficiency at each point, given the specific set of parameters. This happens by running many, many simulated surveys and recording either the average number of planets detected or the fraction of planets detected out of those present. I think the optimization should be run with both measures of efficiency.

For each star within a simulated survey, predicted occurrence probabilities govern whether the star is randomly assigned a planet. If a star gets a planet, further predicted distributions randomly assign it a mass and an orbit. Noise is applied to the resulting RV curve, and the computer tries to recover the original planet.

While that is the overall picture, right now I need to read through the literature to discover what the theoretical distributions are. Also, I need to figure out how to randomly draw numbers from a power-law distribution in IDL. IDL can generate numbers with Gaussian, Poisson, flat, binomial, etc. distributions, but no power law that I can find. I have a theory about how to convert a flat distribution into a power law, but I am not entirely certain about it. (Never mind, I found randomp, which gives a power-law distribution.)

Thursday, July 15, 2010

Literature Background: A Post from Yesterday

Yesterday I got distracted then fell asleep before finishing up after dinner, so here is what I did yesterday:

I read or skimmed papers looking for references to any of my chosen stars that would reveal them to be a non-member, a variable star, or a spectroscopic binary. This eliminated many stars, but the process actually added a bunch as well. For NGC6134, Mermilliod and Mayor were examining twenty red giants, not just the three I had extracted. The CCD data I had been looking at was incomplete, so I found many more red giants when I looked at the plain UBV tab in WEBDA. Also, I realized one cluster I thought wasn't up in late July actually was. All the stars in one cluster got eliminated for one reason or another.

I also added a new cluster, NGC2281 with five red giants.

Today I will add at least two more clusters and assign priority numbers for all stars.

Survey Total
Clusters: 12
Stars: 52

Wednesday, July 14, 2010

Observing List Compiled and Four New Clusters

Part of the morning went to following up on the photometry of FGD459 from yesterday. The systematic offset between the measurements was never completely explained.

I have analyzed four new clusters, though one had to be dropped because I forgot to check whether it was visible in July. One cluster has a low metallicity of -0.16, which reduces the planeticity, but other aspects of the cluster make it good for the survey. Another cluster's stars have masses of 4.5 solar masses; that is the most massive I've analyzed yet, by about an entire solar mass. Lovis and Mayor did include stars in this range, so I think they are legitimate to include.

I also transferred all necessary data into an observing list format. To complete this, I also had to calculate exposure times. I am uncertain as to the correctness of my method to calculate exposure time.

The Handbook of CCD Astronomy says that, for high values, the signal-to-noise ratio is dominated by the noise from the source and is proportional to the square root of the number of photons received. This follows from the incoming photons having a Poisson distribution. The number of photons is the flux times the exposure time. Flux can be expressed as 10^(-(m+48.6)/2.5) erg/s/cm^2/Hz, with m in the AB system. AB magnitudes are only slightly off from the Johnson system, so I just used the V magnitude listed. The conversion from ergs to photons I just absorbed into the constant of proportionality. These all give the formula
S/N = C * 10^(-(m+48.6)/5) * sqrt(t)
where C is a constant of 4.4e12, determined from the rule of thumb "an 8th mag star takes 90 s to get S/N = 200".

I set the signal to noise ratio equal to 90, solved for t, and rounded up to the nearest convenient number. For stars dimmer than 12th mag, this gives times in excess of a thousand seconds. This seems an inefficient use of time, so those clusters should go to lower priority values.
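
Solving that formula for t gives the exposure time directly: t = ( (S/N) / (C * 10^(-(m+48.6)/5)) )^2. A quick IDL check with the numbers above:

    ; Exposure time from S/N = C * 10^(-(m+48.6)/5) * sqrt(t), solved for t.
    c   = 4.4d12      ; from "an 8th mag star takes 90 s to reach S/N = 200"
    snr = 90d0        ; target signal-to-noise ratio
    m   = 8d0         ; V magnitude of the star
    t   = ( snr / (c * 10d0^(-(m + 48.6d0)/5d0)) )^2
    print, t, ' seconds'    ; ~18 s at 8th mag; climbs past 1000 s fainter than ~12th mag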

Tomorrow I intend to search through the literature to find any possible problems with the new prospective stars to eliminate variables, non-members, spectroscopic binaries, etc.

Current Survey Total
Stars: 37
Clusters: 11

Monday, July 12, 2010

Stellar Photometry

Most of the trouble with stellar photometry arose from logging in to the right place, getting the data, and opening atv. Once those got worked through (I still do not completely understand the mysteries of how computers choose to operate), the rest of the work was fairly simple repetition. Even though all my values were internally consistent, there was an offset from Mary's results, which used a different calibration star. We will examine this discrepancy further tomorrow.

I have also added a new cluster, NGC 1496, with three stars around 2.4-2.7 solar masses. New total= 24 stars.

I also read a fascinating article in Astronomy magazine about the possibility of viewing exo-zodiacal light. The authors, Kuchner and Stark at the Goddard Space Flight Center, observed 51 Ophiuchi with the Keck Interferometer Nuller and were able to distinguish between an inner ring of asteroid-belt-like exo-zodiacal dust and an outer, Kuiper-belt-like region. They are also running simulations of the interactions between zodiacal dust and a planet. If our optics ever improve to the point of directly imaging an Earth-sized planet, the zodiacal dust could just wash it out. On the positive side, though, a planet may induce perturbations in the dust that reveal its existence.

And it's a small world after all: there was also a news clip about six planets that orbit in the direction opposite to their star's rotation. Another news short discussed the lack of methane on GJ 436b.

Tomorrow I have a goal of collecting at least three new clusters and calculating all the exposure times.

Friday, July 9, 2010

GOOD VALUES RETURNED!!!!!





After much modification, my MATLAB code now returns the following parameters for M37, NGC 2099:
Age = 440 million years
m - M = 11.48
E(B-V) = 0.25
These values are in good agreement with published parameters. Now that I know better where the peak lies, I have tightened my prior and use a finer grid close to it.
Even though the PDFs and results look great, it doesn't look as good a visual fit as it could possibly be. It may just be that I am paying attention to how well it fits in the red giant area, which is easier to judge by eye. Now that everything else is working, I will modify the standard deviation for each star to account for its position, giving more weight to red giants and stars near the cut-off than to those on the main sequence.

I first got my code to work before lunch, but after making the age grid finer it stopped working: I had somehow failed to transfer the new array of isochrone values correctly. Other problems I encountered include the probability rounding to zero, getting -Inf for tau squared (solved by adding a very small positive number), and mysterious NaNs (eliminated by deleting NaNs that had somehow shown up in my star data). I also took stars obviously not on the isochrone out of the data set.

Today I also learned that the UBV measurements taken with CCD cameras are listed under a different link than the plain UBV from photographs. Hence I need to change the data I have been working with for all the clusters. It probably does not change the preliminary mass estimates much, but for fitting the clusters later I will need to use the CCD data, since it is more accurate.

I also need to study whether or not it is possible to fit for metallicity. Metallicity does slightly change the isochrone, so I need to determine whether or not it is discernible in a fit.

I have also added three new stars in NGC 6134 at mass 2.14 solar masses. This brings the total to 21 viable stars. For the next few days I will set aside the fitting to focus on getting more stars and calculating priorities and exposure times.

Thursday, July 8, 2010

C++ Code

Today I wrote code in C++ to see if it performed differently from what I was doing in MATLAB. It actually took more time to compute. I believe this is because it not only has to go through the large arrays that are getting input, but also perform the marginalization before I can get data out. I ended up having to modify my arrays in Excel to add brackets and commas and then copy and paste them directly into the code, since I could not figure out how to read a vector, let alone a two-dimensional array, into C++ code.
After marginalizing to get least-squares values for both age and distance modulus, both were far from the published values and from what looks like a good fit by eye. So I have the code, and it computes, but it still needs work.

Wednesday, July 7, 2010

HR fitting for three parameters and new cluster

I have chosen two new stars in NGC 1528. They are both definite cluster members and have masses around 3.1 solar masses. Their metallicities are unknown.

In the arena of chosen stars, I need to find metallicities for more of the clusters, or figure out how to calculate it. I also need to figure out how to calculate necessary exposure times. I want to conceptually work through the process of figuring it out instead of just looking it up, so that will take some thinking. I will continue to add new clusters to the list.

I also worked on extending the least-squares fitting to cover distance, age, and reddening. Yesterday I managed to correct my distance-fitting mechanism: what I had been doing previously ignored basic algebraic rules, so I redid the code correctly and it worked very well. Today I devised a way to compile the isochrone values for many ages into one multidimensional array to use in the fitting process. This allowed me to successfully fit for both distance and age while keeping reddening fixed at the known value of 0.227. Sadly, when I extended it to cover all three variables, distance, age, and reddening, it didn't work. Age just went to the upper limit, and reddening was significantly off from the previously stated value as well. Just looking at the plot showed it to be wrong too.

I marginalized over the parameters in tau-squared space, and the peak the code returned for reddening was not the peak that the plot showed. I don't know what is going on. The marginalized age distribution was just an increasing line, yet somehow distance still worked quite well. I need to figure out why this is not working. I believe outliers may be throwing it off, so I need to work a process for handling them into my code.

This is getting to take quite some time to compute, with 11 ages and 100 values each for reddening and distance, and many computations at each combination. To speed things up and hopefully get code that works, I intend to write a C++ program to fit HR diagrams. Sadly, I do not know how to compile and run code on my own computer, since I learned on a fully set up Linux machine. So I need to figure that out and then write the code. I also need to figure out how to import the multidimensional age array into the C++ code.