Summary of chop-nod analysis telecon of July 16, 2008

Mike, Giles, Lero, Tristan, Megan

_____________________________________________________________

Tristan's analysis of DG Tau

Tristan gave an overview of his analysis, which is nearly finished and
should soon yield final results for DG Tau. The reduced chi-squared is
below 1 after his background subtraction of Q and U, which is good news.
Since he is preparing a memo on this for posting very soon, I won't take
the time to summarize the discussion here.

_____________________________________________________________

Discrepancy between V4 and V5

Tristan explained that the discrepancy between sharpcombine V4 and V5
behaves as follows:

- For one single file (collected in the June run), the discrepancy is
  comparable to the statistical error. It gets reduced, but does not go
  away, at larger resolution on the sharp-combine map.

- For the entire DG Tau data set, the discrepancy is negligible compared
  to the statistical errors (which are ~0.5%).

Tristan had earlier thought that there was a significant discrepancy,
but he was using the small-kernel I to convert from q, u to Q, U. When
he uses the large-kernel I, the discrepancy becomes negligible. (A brief
note on this conversion is appended at the end of this summary.) In fact
some discrepancy is expected (due to nearest-neighbor operations, for
example), so there is no evidence of any bugs in V4 or V5. Fortunately,
the discrepancy seems negligible for the only significant data set on
which we have tested it. No further action is required, though anyone
who wants to analyze their data both ways and compare is encouraged to
do so.

_____________________________________________________________

Ideas for where to go from here

Giles discussed a problem in using chi2.c to inflate error bars: if the
reduced chi-squared depends on the number of bins, what do we do? Giles
agreed to write a memo showing that the number of bins used for chi2
should not matter as long as there are many data points in the map, each
with its own measured reduced chi-squared, and you average over these.
Even 3 bins should then give valid results for the reduced chi-squared.
(A small numerical check of this argument is appended below.)

Giles also discussed ideas for further progress in eliminating or
accounting for systematic error. There are three ideas that still need
to be fleshed out:

(1) Use chi2.c to eliminate Q, U outliers, i.e., compare a sharp-combine
    map from a single file to the run average or night average. Do we
    discriminate based on the average chi-squared over a map, the
    median, or the worst point? (A sketch of such a comparison is
    appended below.)

(2) Try some variant of what Tristan did for DG Tau on other data sets,
    i.e., can we get rid of bogus DC offsets in Q, U in some other way,
    such as median filtering? (See the sketch appended below.)

(3) Try to diagnose what is causing the systematic error by looking for
    correlations: sky noise? high source flux? drifting DC offsets
    (loading changes)? The solution will depend on the diagnosis.

_____________________________________________________________
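Appended sketches

On the q, u to Q, U conversion mentioned above: the conversion amounts
to multiplying the fractional Stokes parameters by an intensity map, so
the choice of I (small-kernel vs. large-kernel) propagates directly into
Q and U. A minimal sketch in Python, assuming simple same-shaped arrays;
the names I_small and I_large are placeholders, not sharpcombine outputs.

import numpy as np

def qu_to_QU(q, u, I):
    # Convert fractional Stokes parameters q, u to absolute Q, U by
    # multiplying by the intensity map I (all arrays of the same shape).
    return q * I, u * I

# The same q, u combined with two different I maps give different Q, U;
# the discrepancy should be judged against the statistical errors sigma_Q:
# Q_small, U_small = qu_to_QU(q, u, I_small)
# Q_large, U_large = qu_to_QU(q, u, I_large)
# discrepancy_in_sigma = np.abs(Q_small - Q_large) / sigma_Q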
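On Giles's point about the number of bins: a small numerical check,
under the assumption (for illustration only) that "bins" means the
number of independent measurements entering each per-point reduced
chi-squared, with a known per-point mean. If the quoted errors are too
small by a factor f, the map-averaged reduced chi-squared converges to
f**2 whether each point uses 3 bins or 30; only the scatter of the
individual per-point values depends on the bin count.

import numpy as np

rng = np.random.default_rng(0)
n_points = 10000   # map points, each contributing one reduced chi-squared
f = 1.5            # (true error) / (quoted error); unknown in practice

for n_bins in (3, 10, 30):
    # n_bins independent measurements per point: true scatter f, quoted error 1
    x = rng.normal(scale=f, size=(n_points, n_bins))
    redchi2 = np.sum(x**2, axis=1) / n_bins      # per-point reduced chi-squared
    print(n_bins, round(redchi2.mean(), 3))      # ~ f**2 = 2.25 for every n_bins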
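On idea (1): one possible form of the comparison between a single-file
map and the run average, reporting all three candidate discriminators
(mean, median, worst point). Function and array names are hypothetical;
chi2.c may organize this differently.

import numpy as np

def file_vs_average_chi2(Q_file, dQ_file, Q_avg, dQ_avg):
    # Per-pixel chi-squared of a single-file Q (or U) map against the
    # run-average map, with the two errors added in quadrature. For a
    # clean test the average should ideally exclude the file under test.
    chi2 = (Q_file - Q_avg) ** 2 / (dQ_file ** 2 + dQ_avg ** 2)
    chi2 = chi2[np.isfinite(chi2)]
    return chi2.mean(), np.median(chi2), chi2.max()

# A file could then be flagged as an outlier if, e.g., its mean (or
# median, or worst-point) chi-squared exceeds some agreed threshold.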
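On idea (2): two simple variants of removing a bogus DC offset from a
single-file Q or U map, either as one constant per file or as a slowly
varying background from a large median filter. This is purely
illustrative and is not the method Tristan used for DG Tau.

import numpy as np
from scipy.ndimage import median_filter

def remove_dc_offset(M, box=None):
    # M is a single-file Q or U map. With box=None, subtract a single
    # constant (the map median); otherwise subtract a slowly varying
    # background estimated with a box-by-box median filter (note that
    # median_filter does not ignore NaNs, unlike nanmedian).
    if box is None:
        return M - np.nanmedian(M)
    return M - median_filter(M, size=box)

_____________________________________________________________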