Fri, 07 Apr 2006
Having disposed of the misconception that synthetic instruments are slow, let me knock down the mistaken idea that synthetic instruments lead to a "more significant software task for the end user or system integrator".
Here we have one of those falsehoods that derives a good deal of plausibility from its proximity to the truth. Indeed, synthetic instruments are software. Thus it is nearly a tautology to say that creating synthetic instruments involves software to some extent. They will certainly involve more software than a natural instrument built from measurement-specific hardware. For example, a mercury thermometer has little, if any, software involved in its "system integration" or end use. A digital thermometer, on the other hand, certainly does.
If you are irrationally scared of software, a nurse may terrify you by revealing that there's software in the thermometer she's stuck under your tongue. If, instead, you are irrationally afraid of mercury, maybe the thought of inserting a fragile, glass-encapsulated vial of the deadly silver fluid into your body sends shivers down your spine. In any case, we can dismiss these irrational fears as quite tangential to a reasoned discussion of the merits of instrument design.
Perhaps the assertion that synthetic instruments create a more significant software task for the end user or system integrator isn't merely a tautology, nor is it an expression of software anxiety. Maybe it's trying to claim that synthetic instruments are somehow more work, and that this extra work involves software.
If this is the actual claim, I can only ask: "more work than what?" What are synthetic instruments being compared against? Against legacy test gear, the so-called "traditional" instrumentation? Is that the comparison? Because if it is, I think the assertion is clearly false. In fact, the whole reason the synthetic instrument concept was originally created was to avoid the massive software effort surrounding the maintenance of legacy test gear in automated test equipment (ATE).
Let's say you have thousands of ATE systems deployed in your operations, each of which comprises a score of measurement-specific instruments in rack-mounted or modular packages. Let's also say that you have spent billions of dollars writing test programs to run the instruments in these ATE systems, and that you are really happy with how it all works.
What do you do when the legacy, measurement-specific instruments in your ATE racks start to get old? Hardware instruments aren't manufactured forever, and sooner or later the day will come when you can no longer buy the same make and model of your legacy instruments. Maybe you can't even buy new instruments to plug into the now-obsolete modular packaging scheme that was once as popular as disco music. Changing even one modular instrument requires that you change them all.
What do you do? Do you buy new measurement-specific instruments, plug them into new mainframes, and rewrite millions of lines of test program code? To me, that sounds like a significant software task for the end user or system integrator.
Synthetic instruments were conceived as an alternative to this quandary, and thus as a way to reduce the software task for the end user or system integrator. If we can synthesize an instrument purely in portable software that runs on measurement-generic hardware, then maybe, if we do our job right, we can replace, upgrade, or otherwise alter the underlying hardware without changing a single line of test program code.
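To make that idea concrete, here is a minimal sketch of the architecture being described. Everything in it is hypothetical (the names `GenericDigitizer`, `measure_mean_power`, and the vendor classes are invented for illustration): the measurement logic is written once against a generic hardware contract, so the hardware can be swapped without touching the test program.

```python
# Hypothetical sketch of a synthetic instrument: the measurement is
# pure, portable software; the hardware is a replaceable backend.
from abc import ABC, abstractmethod

class GenericDigitizer(ABC):
    """Measurement-generic hardware contract: just capture samples."""
    @abstractmethod
    def capture(self, n_samples):
        ...

class VendorADigitizer(GenericDigitizer):
    def capture(self, n_samples):
        return [0.5] * n_samples  # stand-in for real hardware I/O

class VendorBDigitizer(GenericDigitizer):
    def capture(self, n_samples):
        return [0.5] * n_samples  # different hardware, same contract

def measure_mean_power(hw, n_samples=4):
    """The synthetic instrument itself: portable measurement code."""
    samples = hw.capture(n_samples)
    return sum(s * s for s in samples) / n_samples

# The "test program" never names the hardware. Retiring Vendor A's
# digitizer and buying Vendor B's changes nothing in this code.
print(measure_mean_power(VendorADigitizer()))  # 0.25
print(measure_mean_power(VendorBDigitizer()))  # 0.25
```

The point of the sketch is that obsolescence is absorbed entirely at the hardware-driver boundary; the millions of lines of test program code sit above that boundary and never hear about the change.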
It should be clear, therefore, that synthetic instruments, properly done, will not increase the software task involved in maintaining or upgrading legacy ATE. Quite the contrary: they reduce or even eliminate it.
Posted Apr 07, 2006 at 14:42 UTC, 613 words