Post by GaryD
A good example of a distributed operating system and its applications can
be found at http://www.GreenTeaTech.com. You can find the whitepaper and a
free download there.
The GreenTea software is a typical example of a distributed network
computing platform: applications can be developed as if they were written
for a local system, and the GreenTea platform then runs them seamlessly on
a network of computers.
Is that a good thing? For performance (among other reasons, like determining
an appropriate response in case of an error) one should always treat remote
and local resources differently. It's great that the system makes
distribution easy, as long as it does not hide any information about a
resource's status (i.e. local vs. remote). Early EJBs are an example of a
theory (transparent distributed objects/components) that failed in practice.
Older EJBs were completely separated from concerns about where resources
were located. Sun had to introduce 'EJB local interfaces' to make things
faster. (With the older EJBs, even communication between an EJB and local
processes was done over IP.)
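To sketch why the local/remote split matters at the type level: a remote-style interface declares checked failure on every method, so callers cannot pretend the call is local, while a local interface makes no such demand. This is a minimal illustration in plain Java, not actual EJB code; the names (AccountLocal, AccountRemote, AccountBean) and the balance value are my own assumptions.

```java
import java.rmi.RemoteException;

// Local-style interface: a plain in-process call, no networking concerns.
interface AccountLocal {
    long getBalance();
}

// Remote-style interface: every method can fail with RemoteException,
// so the caller is forced to plan for network and partial failure.
interface AccountRemote {
    long getBalance() throws RemoteException;
}

// One bean can serve both views (balance of 100 is an arbitrary example).
class AccountBean implements AccountLocal, AccountRemote {
    private final long balance = 100;
    public long getBalance() { return balance; }
}

public class LocalVsRemote {
    public static void main(String[] args) {
        AccountBean bean = new AccountBean();

        // Local view: just a method invocation.
        AccountLocal local = bean;
        System.out.println("local balance = " + local.getBalance());

        // Remote view: the type system makes the failure mode visible.
        AccountRemote remote = bean;
        try {
            System.out.println("remote balance = " + remote.getBalance());
        } catch (RemoteException e) {
            // In a real deployment: retry, fail over, or report the error.
            System.err.println("remote call failed: " + e);
        }
    }
}
```

The point is exactly the one the EJB local interfaces addressed: the caller of AccountLocal can stay simple, while the caller of AccountRemote is told, in the signature, that the resource may be elsewhere.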
Remote resources should typically be used sparingly. Often, when it is
unknown what is remote and what is local, a developer may start to use
remote objects as freely as local ones. Given the typical number of objects
in medium to large OO applications these days, I would imagine this would
cause severe communication performance problems.
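A back-of-the-envelope model makes the problem concrete: if every fine-grained getter is a network round trip, total latency scales with the number of calls, whereas one coarse-grained call that returns all the data pays the round trip once. This is a toy cost model of my own, with an assumed 50 ms round trip; the numbers are illustrative only.

```java
public class ChattyVsBatched {
    // Assumed network round-trip cost per remote call (illustrative).
    static final int LATENCY_MS_PER_CALL = 50;

    // Chatty style: one remote call per field, per object.
    static int chattyCostMs(int objects, int fieldsPerObject) {
        return objects * fieldsPerObject * LATENCY_MS_PER_CALL;
    }

    // Batched style: a single remote call returns everything at once.
    static int batchedCostMs() {
        return LATENCY_MS_PER_CALL;
    }

    public static void main(String[] args) {
        int objects = 1000, fields = 5;
        // 1000 objects x 5 fields x 50 ms = 250,000 ms of pure latency,
        // versus one 50 ms round trip for the batched fetch.
        System.out.println("chatty:  " + chattyCostMs(objects, fields) + " ms");
        System.out.println("batched: " + batchedCostMs() + " ms");
    }
}
```

With even a modest object graph, the chatty style spends minutes on latency alone, which is why a system that hides the local/remote distinction invites exactly this mistake.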
Check out this mid-90s paper from Sun ("A Note on Distributed Computing",
Waldo et al., 1994):
Abstract:
We argue that objects that interact in a distributed system need to be dealt
with in ways that are intrinsically different from objects that interact in
a single address space. These differences are required because distributed
systems require that the programmer be aware of latency, have a different
model of memory access, and take into account issues of concurrency and
partial failure. We look at a number of distributed systems that have
attempted to paper over the distinction between local and remote objects,
and show that such systems fail to support basic requirements of robustness
and reliability. These failures have been masked in the past by the small
size of the distributed systems that have been built. In the enterprise-wide
distributed systems foreseen in the near future, however, such a masking
will be impossible. We conclude by discussing what is required of both
systems-level and application-level programmers and designers if one is to
take distribution seriously.
http://research.sun.com/techrep/1994/abstract-29.html
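One of the abstract's points, partial failure, is worth a small sketch: a remote call can fail *after* the server has done the work, so a blind retry may apply the effect twice. De-duplicating by request id is one common mitigation. This is my own illustrative example, not something the paper, EJB, or GreenTea prescribes.

```java
import java.util.HashSet;
import java.util.Set;

// Toy "account" server showing why retries after an ambiguous failure
// need idempotency: without the request-id check, a retried deposit
// would be applied twice.
public class PartialFailure {
    private int balance = 0;
    private final Set<String> seen = new HashSet<>();

    // Idempotent deposit: a retried request with the same id is a no-op.
    public void deposit(String requestId, int amount) {
        if (!seen.add(requestId)) {
            return; // duplicate: this request was already applied
        }
        balance += amount;
    }

    public int getBalance() { return balance; }

    public static void main(String[] args) {
        PartialFailure acct = new PartialFailure();
        acct.deposit("req-1", 100);
        acct.deposit("req-1", 100); // retry after an ambiguous failure
        System.out.println("balance = " + acct.getBalance()); // 100, not 200
    }
}
```

A purely local call never has this ambiguity, which is the paper's core argument for treating the two kinds of objects differently.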
Post by GaryD
Developers on the GreenTea platform do not have to be concerned with how
many machines the application will run on. The underlying GreenTea platform
takes care of distributing and transporting the tasks to remote machines,
and collecting results for the applications. Since GreenTea is a totally
decentralized platform, access to the immense resource is not limited to
one person or one server; everyone on the network can have the same level
of access to the resources. So it is also a true peer-to-peer computing
paradigm.
We have found GreenTea quite useful, intuitive and easy to use in our
environment. GreenTea allows us to use 50 office machines to form a
virtual supercomputer on which we can run our GreenTea-enabled
applications. We have not found any major problems with it, except that we
have to install the Java JRE on each machine first.
Interesting. Have any of the points I mentioned above caused problems? If
so, how did you get around them? If not, how does the system (GreenTea)
mitigate them?
l8r, Mike N. Christoff