It's true and false. Because of the increased resource demands, DEC
has been pushing exactly this as a "solution" for those 6MB VS2000
owners (and there are thousands of them). Seen from the viewpoint of
the guy who already has the VAXstation 2000, he's not too far off the
mark.
Now, on the other hand, the vision is that the application can run
wherever it is most appropriate... this might be the workstation
itself, a Cray, a VAX 6210, or a Unix hot-box. Viewed this way, it
simply expands the possibilities of one aspect of high-speed local area
networks. It does *not* solve the problem of distributed computing --
it's more akin to an interactive terminal than to a distributed
application environment. It does provide a lowest-common-denominator
way of getting input and output across hardware and software
boundaries.
> Is there anyway to run DECwrite on a single workstation and see the
> split between the Display process and the Client Application process.
What is the purpose of determining this "split"? Why would a 50/50
split be better or worse than a 90/10 split or a 10/90 split? (If I
had my druthers, I would like to see the application apply the biggest
portion of the load since I can choose whether I run the application
locally or remotely. With the server, I don't have that option...)
A simple answer is probably an incorrect one because there are lots of
caveats and gotchas. For example, have you considered the "split" with
the window manager process? Have you considered the amount of
processing that other clients may be doing as you occlude and expose
their windows while using the client you think you are testing?
Also, have you considered which system resources to study? The obvious
one is CPU time, but there are also memory utilization, I/O (both
buffered and direct), page faults (hard and soft), and so on.
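
If you just want a quick per-process snapshot of several of those
counters at once, something like this should do it (the PID is a
placeholder, and the exact fields shown vary by VMS version):

    $ SHOW PROCESS /ACCOUNTING              ! your own process
    $ SHOW PROCESS /ACCOUNTING /ID=xxxxxxxx ! another process, by PID

That gives you CPU time, buffered and direct I/O counts, and page
faults in one place, which is handy for before-and-after comparisons.
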
Even if CPU time were the "right" resource to study, are all CPU cycles
created equal? For example, if you study CPU usage using the MONITOR
utility, you may find that your average CPU usage is around 20% but
there are brief periods in which CPU usage shoots up to 100%. During
these critical periods, which process is using how much CPU? Which one
has the highest priority?
But if you are still determined, read up on using MONITOR PROCESS to
record data to a file, then analyze *all* data on *all* of the processes.
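
In round numbers, that recipe looks something like this (the file names
are just examples; check the MONITOR documentation for the full set of
qualifiers):

    $ MONITOR PROCESSES /RECORD=DECWSPLIT.DAT /INTERVAL=5 /NODISPLAY
    $ ! ...run your test; stop MONITOR when you are done...
    $ MONITOR /INPUT=DECWSPLIT.DAT PROCESSES /SUMMARY=DECWSPLIT.SUM

The summary file then has per-process numbers for the whole run, which
you can compare across local and remote configurations.
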
> Should the performance be .25 percent faster running it on two
> machines.
Be very *very* careful here. What are the two machines? Are they the
same or different? Which one is running the server and which one is
running the client? What else are they doing?
For example, suppose you had a VS2000 w/ 6 megs of memory and a VS3100
w/ 16 megs. If your server is on the VS3100 and it is not doing
anything else, why would you want to run DECwrite on the VS2000 which
has a CPU that is ~3 times slower?
But if you have a VS2000 with 6 megs of memory that is maxed out in
terms of memory and CPU usage, and you have a 6210 that is just sitting
there idle, DECWindows gives you the option of running some or all of
your applications remotely. Yes, in that case you will probably see an
improvement in performance since the idle 6210 has a faster CPU and
more memory than the VS2000. But one rarely has an idle 6210 lying
around, and your mileage may vary. Try it and see.
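
For the record, the mechanics are only a couple of DCL commands. The
node name here is made up, and the last line is a stand-in for however
DECwrite actually gets started at your site:

    $ ! logged in on the idle 6210 over the network:
    $ SET DISPLAY /CREATE /NODE=MYWS2K /TRANSPORT=DECNET
    $ RUN SYS$SYSTEM:DECWRITE           ! or your site's startup command

The application then runs on the 6210 and paints its windows on the
workstation named in /NODE.
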
*In general,* you want to balance resources against needs. You can
study your system resource availability and process resource needs
using MONITOR, SPM, VPA, and so on.
> Has any testing been done. Pointer to the refs please.
Yup. I just happened to have an idle 6210 lying around :-)
I hope the above has convinced you that the experimental methodology is
not trivial. My group is discussing how to publish the results in such
a way that they are not misconstrued...
John B.