From ggriffin@cmbr.phys.cmu.edu Sat Jul 4 15:27:31 1998
To: Giles Novak, Matt Newcomb, Jeff Peterson
Subject: integrating SPARO

Giles (Matt, correct me if I'm misspeaking anywhere, and add whatever),

Matt and I can't think of any fundamental problem with giving control of the pointing PC to your Mac. It runs against the grain of the existing system, but it could be made to work. However, there are issues to hash out, and we don't want to find ourselves hashing them out at Pole, making Matt stick around any longer than he has to.

For one thing, we should minimize modifications to comsoft. Comsoft runs under DOS and can only occupy 640 KB. With all the code we've added, we're pushing that limit to the point where, for every line of new code that is added, something has to be jettisoned. So, if possible, Bob should modify yerk to talk to comsoft, and not the other way around. That entails yerk opening a socket and blasting out commands in the appropriate format (which Matt has more-or-less documented).

A few things we need to nail down soon:
a) Is it easier to blast commands directly to comsoft, or to comsoft via Matt's server?
b) What does the yerk program expect back from comsoft, and does that come directly back or back via the server?
c) What sort of chopper waveform and trigger pulse do you require, and are we currently capable of generating it (will a simple signal generator do the job)?
d) How does the chopper trigger pulse get to your data-taking system?
e) What exactly are Bob, Matt, and Greg expected to do to make this all work?

I can set up a comsoft PC here at CMU for testing purposes. Matt has a Mac at Pole that he can use to test things down there (as far as he can without having your data system).
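To make the yerk side concrete, here is a rough sketch (in Python, purely to illustrate; the real thing would live in yerk) of "opening a socket and blasting out commands". The wire format, the TRACK command name, and the host/port are invented placeholders; Matt's write-up has the actual format comsoft expects.

```python
import socket

def format_command(cmd, *args):
    # Hypothetical wire format: space-separated fields, newline-terminated.
    # The real format is whatever Matt documented for comsoft.
    return " ".join([cmd] + [str(a) for a in args]) + "\n"

def send_command(host, port, line, timeout=5.0):
    # Open a socket to comsoft (or to Matt's server), send one command,
    # and read back whatever acknowledgement comes.
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(line.encode("ascii"))
        return s.recv(1024).decode("ascii")

# e.g. send_command("comsoft-pc", 5000, format_command("TRACK", 83.5, 22.0))
```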
-------------------------------------------------------------------------------------

Anyway, you asked me to write up how the current system works. I don't know how much of this info is relevant, but here goes.

Matt currently has 4 devices, which share a common subset of basic commands and a common format for outputting their data:
a) comsoft pointing PC
b) VXI data-taking PC
c) temperature-monitoring PC
d) dewar-monitoring PC

All devices communicate (over the network) with a server running on a (Linux) control computer. The user communicates with the devices, via the server, by executing tcl commands. You can think of the server as the center of a wheel, with spokes running to all the devices and to the tcl script. The point is, messages, responses, data, etc. never go AROUND the wheel, just in and out. The server:
a) relays the tcl commands to the appropriate devices
b) returns the responses from each device
c) returns data from each device, which may be stored to disk and/or accessed by the script

An example script can be seen at http://cmbr.phys.cmu.edu/viper. That's a very old script; Matt has made many modifications since then. For example, the tcl script can now access the returned data: it can poll comsoft's current position and wait until the telescope is within some threshold of where it ought to be before taking data. In the older example script, it just waits an appropriate period of time.

The script basically does some setup (filters, clock synchronization, etc.) and then proceeds to raster across the sky, repeatedly doing the following:
  tell the pointing device to move here
  tell the data-taking device to take this much data

We've found that having the flexibility of a script is very useful, especially for scanning something like the galactic plane, where you want to trace not a simple grid but an elliptical path across the sky.
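The move / poll-until-on-source / take-data loop reads roughly like this. This is a Python sketch with stand-in device calls, not the actual tcl; the function names and the settling tolerance are made up for illustration.

```python
def on_source(current, target, tol):
    # True when both coordinates are within tolerance of the request.
    (cra, cdec), (ra, dec) = current, target
    return abs(cra - ra) <= tol and abs(cdec - dec) <= tol

def raster(points, move, get_pos, take_data, tol=0.01, max_polls=1000):
    # For each raster point: command the pointing device, poll its
    # telemetry until the telescope has settled, then take data.
    for target in points:
        move(*target)
        for _ in range(max_polls):
            if on_source(get_pos(), target, tol):
                break
        else:
            raise RuntimeError("telescope never settled on %r" % (target,))
        take_data()
```

The `points` list can trace any path, which is where the scripting flexibility pays off: an elliptical track across the galactic plane is just a different list of (ra, dec) pairs.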
The VXI is triggered (1200 Hz) by a box (inside) that generates a triangular chopper waveform and n triggers per chop (where n = 128, 256, or 512). This triggers the VXI data system (outside), which records:
  4 bucked high-gain (x500) data channels
  4 DC channels (used for calibrations, sky dips; 300 mV corresponds to approximately 300 K)
  1 LVDT from the chopper

The VXI and Labview/NT data-taking PC are outside, on a 100 Mb fiber connection. There is a fiber-optic keyboard/video extender that allows you to access things from inside. For that matter, all the devices can be accessed (via a video/keyboard switcher) from a single keyboard and screen. But once things are working you hardly ever have to use this; you just send commands from the central control PC.

The comsoft PC accepts all the commands from the manual, plus a few extras, over the network. When instructed to, it begins sending a telemetry stream back to the server, which is then stored on disk. The data rate is low (6 channels at ~10 Hz), and each record consists of 6 floats:
  current ra, dec
  current az, el (encoder values)
  requested ra, dec

Actual observing works like this: you execute a tcl observing script, and the server saves the data from the various devices to the control computer's disk. When the script has completed, you end up with something like this:
  980628.hmt/ec2_124510/data      => data from the data-taking PC
  980628.hmt/ec2_124510/pointing  => data from the pointing PC
  980628.hmt/ec2_124510/DewerTemp
  980628.hmt/ec2_124510/heaters

The chopper LVDT, a significant part of the pointing data, is actually stored outside at the data-taking PC, not by the pointing PC. Because the clocks of all devices are synched at the beginning of each observation script, all the (timestamped) data and pointing info can be combined in software.
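Since the clocks are synched and everything is timestamped, combining the streams amounts to interpolating the ~10 Hz pointing telemetry onto each data sample's timestamp. A hedged Python sketch of that idea (the real merging lives in the analysis software; the record layout here is invented):

```python
import bisect

def pointing_at(pointing, t):
    # pointing: list of (t, ra, dec) records sorted by time (~10 Hz).
    # Linearly interpolate ra/dec at an arbitrary data timestamp t,
    # clamping to the ends of the stream.
    times = [p[0] for p in pointing]
    i = bisect.bisect_right(times, t)
    if i == 0:
        return pointing[0][1:]
    if i == len(pointing):
        return pointing[-1][1:]
    (t0, ra0, dec0), (t1, ra1, dec1) = pointing[i - 1], pointing[i]
    f = (t - t0) / (t1 - t0)
    return (ra0 + f * (ra1 - ra0), dec0 + f * (dec1 - dec0))

def tag_data(data, pointing):
    # data: list of (t, signal) samples from the data-taking PC.
    # Returns (t, signal, ra, dec) with interpolated pointing attached.
    return [(t, v) + pointing_at(pointing, t) for (t, v) in data]
```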
On the control PC, in matlab, you type

  imagemake('980628.hmt/ec2_124510',ch(1:2));

and an image pops up on the screen, taking into account the currently active pointing model, transfer-function deconvolution mode, atmospheric subtraction mode, etc. The devices are all fairly dumb: they just save the raw data to disk and do not pre-chew it. That is (deliberately) left for the control PC or analysis workstation to handle.

So, Giles, that's a rough overview. If there are any parts you want more info on, or you have specific questions, just give a holler.

-Greg
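The kernel of what something like imagemake does (leaving aside the pointing model, deconvolution, and atmospheric subtraction, which are the hard parts) is binning pointing-tagged samples into sky pixels and averaging. A toy Python version, with all names and the pixel scheme invented for illustration:

```python
def bin_map(samples, ra0, dec0, pix, nx, ny):
    # samples: (signal, ra, dec) tuples. Accumulate each sample into the
    # sky pixel it falls in, then average; unhit pixels stay None.
    total = [[0.0] * nx for _ in range(ny)]
    hits = [[0] * nx for _ in range(ny)]
    for v, ra, dec in samples:
        ix = int((ra - ra0) / pix)
        iy = int((dec - dec0) / pix)
        if 0 <= ix < nx and 0 <= iy < ny:
            total[iy][ix] += v
            hits[iy][ix] += 1
    return [[total[j][i] / hits[j][i] if hits[j][i] else None
             for i in range(nx)] for j in range(ny)]
```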