Summary of discussion during chop-nod analysis telecon of Feb 6, 2008

Hiroko, Larry, Lero, Mike, John, Giles

_________________________________________________________________

M82 analysis - April 2007 runs:

File 36784: Larry explained that the error in I is calculated from the variance of the four measured values of I (= H + V) at the four different HWP angles. This means that the error in I is itself subject to error, since it is estimated from only four points. We just got unlucky in that the four measurements were, by statistical fluke, very close together. Solutions are: (a) watch out for this and cut files when it happens; (b) fix sharpinteg so that anomalously low sig-I measurements are inflated. Two ways to do the inflation are to set all sig-I values lower than the median sig-I equal to the median, or to set only those sig-I values that fall below the median by more than three sigma equal to the median (both schemes are sketched below). Larry will implement both (with a flag to choose either or neither) sometime in the next week or two.

Lero is trying to learn from Mike how to correct the pointing for M82. In the process, they discovered that when Mike and Lero run sharpinteg on the same file, they get different answers for EL: Lero gets one answer on zamin, and Mike gets a different answer on his Mac. The next step is to make sure they are using the same flags. Larry explained that the way he gets EL out of the raw file is to take some kind of median of the time-stream values of EL in the file, so it is hard to see how Lero and Mike could get different answers. This bears further investigation (a quick cross-check is sketched below). The discrepancy in EL is less than a couple of degrees. (Mike uses fv to examine the header; this should work.)

_________________________________________________________________

Reduced-chi-squared (rcs) procedures:

Mike reported rcs results for the Feb 2007 M82 data. The mean rcs of Q is 2.3 and of U is 1.7, provided that three very high values are discarded in the U case (bad data?). But the spread of rcs values is large (the standard deviation is of order 10 for Q). To see whether this is consistent with random statistics (remember, the rcs is determined from just 4 bins = 3 degrees of freedom), one needs to plot the distributions and compare them with the formula in Bevington (see the sketch below). This is the next step.

The rcs of I is low (~0.01). Since the rcs scales as the square of the ratio of the true scatter to the quoted error, this means the sig-I values are overestimates by a factor of about sqrt(1/0.01) = 10. I think this may be expected. When I looked at the DG Tau data, I found that the I maps looked like a point source with a random DC offset superposed (a sky-noise effect). From this I would guess that the four I maps used to compute sig-I have similar random DC offsets, which inflate sig-I (a toy demonstration appears below). But John then does a background subtraction that to a large extent removes this problem, so the errors are overestimates of the true uncertainty in the final I map produced by sharpinteg.

Sometimes the rcs is negative. This is rare, and it is an artifact of the way the error in Q is reconstructed from the error in q and the error in I produced by sharpcombine; we discussed this reconstruction process and its pitfalls in December (a toy calculation below shows how the reconstruction can go negative). Mike pointed out that this also happens in the individual error estimates for the four bins, and if you set those to NaN, a large part of the rcs map comes out NaN. One important next step is to track down the few points that are giving very high rcs in the U map.
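_________________________________________________________________

Appended sketches (Python, illustrative only):

(1) The two sig-I inflation schemes for sharpinteg. This is a minimal sketch of the logic, not the actual sharpinteg code; the function name, the flag values, and the use of the plain standard deviation of the sig-I values as the "sigma" in the three-sigma cut are all assumptions.

```python
import numpy as np

def inflate_sig_i(sig_i, mode="all"):
    """Raise anomalously low sig-I values up to the median.

    mode="all"  : every sig-I below the median is set to the median
    mode="3sig" : only sig-I more than three sigma below the median
                  (sigma = scatter of the sig-I values; an assumption)
    mode="none" : leave sig-I untouched
    """
    sig_i = np.asarray(sig_i, dtype=float).copy()
    med = np.median(sig_i)
    if mode == "all":
        sig_i[sig_i < med] = med
    elif mode == "3sig":
        spread = np.std(sig_i)
        sig_i[sig_i < med - 3.0 * spread] = med
    return sig_i

# A flukishly low sig-I (as in file 36784) gets pulled up to the median:
print(inflate_sig_i([1.0, 1.1, 0.9, 0.01], mode="all"))
```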
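(2) The EL discrepancy. Since Larry extracts EL as a median over the time-stream EL values, Lero and Mike could run the same minimal extraction on the same file on zamin and on the Mac, to separate an I/O or platform problem from a flags problem. This assumes the raw file is FITS with an EL column in its first table extension; the extension index and column name are guesses, not necessarily what sharpinteg reads.

```python
import numpy as np
from astropy.io import fits

def median_el(path, colname="EL"):
    """Median of the time-stream elevation values in a raw file.
    Extension index and column name are assumptions."""
    with fits.open(path) as hdul:
        el = np.asarray(hdul[1].data[colname], dtype=float)
    return np.median(el)

# Identical files should give identical medians on both machines;
# any remaining difference then points at flags or file versions.
# print(median_el("36784.fits"))
```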
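(3) Comparing the rcs distributions with random statistics. For 4 bins = 3 degrees of freedom, the expected distribution of the rcs is a chi-squared with nu = 3 rescaled by 1/nu (equivalent to the formula in Bevington); its standard deviation is sqrt(2/nu) ≈ 0.8, a useful yardstick against the observed spread of order 10. The sketch below uses scipy rather than coding the formula directly.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import chi2

def compare_rcs_hist(rcs_values, nu=3):
    """Overlay the observed rcs histogram on the expected
    reduced-chi-squared density for nu degrees of freedom."""
    rcs = np.asarray(rcs_values, dtype=float)
    rcs = rcs[np.isfinite(rcs) & (rcs >= 0)]  # drop NaN/negative artifacts
    x = np.linspace(0.0, max(10.0, rcs.max()), 400)
    # If Y ~ chi2(nu), then rcs = Y/nu has density nu * pdf_Y(nu * x).
    plt.hist(rcs, bins=50, density=True, alpha=0.5, label="observed rcs")
    plt.plot(x, nu * chi2.pdf(nu * x, nu), label="expected, nu = 3")
    plt.xlabel("reduced chi-squared")
    plt.legend()
    plt.show()
```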
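(4) The DC-offset picture for sig-I. A toy simulation supports the guess above: four I maps sharing one point source but carrying independent random DC offsets show a per-pixel scatter dominated by the offsets, and a per-map median subtraction (standing in for John's background subtraction) removes most of it. Map size, offset scale, and noise level are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
source = np.zeros((32, 32))
source[16, 16] = 1.0                      # point source

# Four I maps: same source, independent DC offsets plus pixel noise.
maps = np.array([source + rng.normal(0.0, 0.5)
                 + rng.normal(0.0, 0.05, source.shape) for _ in range(4)])

sig_raw = maps.std(axis=0).mean()         # offset-inflated sig-I
sub = maps - np.median(maps, axis=(1, 2), keepdims=True)
sig_sub = sub.std(axis=0).mean()          # after per-map median subtraction
print(sig_raw, sig_sub)                   # scatter drops by roughly 10x
```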
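(5) How the reconstructed error in Q can go negative. The exact reconstruction formula is not recorded here (it was discussed in December), but one plausible form shows the failure mode: with Q = q * I and no covariance, error propagation gives sig_q^2 = sig_Q^2 / I^2 + q^2 sig_I^2 / I^2, so recovering sig_Q^2 from sharpcombine's sig_q and sig_I requires a subtraction that can come out negative when sig_q happens to be small; negative "variances" then propagate into negative rcs values.

```python
def sig_Q_squared(I, sig_I, q, sig_q):
    """Invert sig_q^2 = sig_Q^2/I^2 + q^2 sig_I^2/I^2 (a plausible,
    not confirmed, form of the reconstruction): the subtraction can
    yield a negative 'variance' for Q = q * I."""
    return I**2 * sig_q**2 - q**2 * sig_I**2

# A small sig_q with a sizeable sig_I drives the result negative:
print(sig_Q_squared(I=100.0, sig_I=20.0, q=0.05, sig_q=0.0005))  # -0.9975
```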
_________________________________________________________________

The following items were not discussed on the telecon:

- Mike's plans to make a smoothed tau for November 2007
- the listserve
- alternate ideas for the telecon: iChat (video?) on Mac
- PostScript problems on Lero's Mac and on kilauea, and Hiroko's investigation of the kilauea problems (in any case, I guess these are solved)

_____________________________________________________________