Post by d***@hotmail.com
Just a note. I am beginning to regret the title of my post. I hope it
does not give the wrong impression in terms of my respect for the
practice of DC (vs. the theory of DC). It was initially due to
frustration over the popular definition of "DC = SETI", as opposed to
"SETI is a type of DC system".
I wondered. In this particular case, a large population of speakers and
listeners came seemingly out of nowhere and began using the term
differently (or at least more narrowly) than those before. I expect
their numbers will diminish. Then again, some people (in this very
group, I think) have suggested that my own definition of the term is not
fully consistent with that used before my time in the field (e.g. 70s
and 80s). C'est la vie.
Post by d***@hotmail.com
Post by David C. DiNucci
I firmly believe in the ability of theory to provide insight and deeper
understanding while we attempt to prove or disprove various results, but
I also believe that theoretical results pay off only when they give us
useful answers to real-world problems. What bothers me most is people
who view our current practice as immutable, and theoretical results as
just providing tools to explain why they are inadequate in various ways.
If the theory doesn't motivate solutions without those inadequacies,
it's not telling us where to put our time, but only where not to put it.
True. But there is value in knowing where not to put one's time
(like the halting problem in classical theoretical CS). See below for
more on this.
Sure, but the halting problem is not at all obscure--it refers to the
computation we all do every day, and there are no ways of hiding the
effects of the halting problem by building better tools or more layers.
We can make decidable computational models, but they are also less
powerful. If you want to claim an analog for layering communication in
real-world systems (see below), or if you think you already have, I'd be
more willing to consider some of your cited papers as important.
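[For concreteness, here is a minimal sketch of the diagonalization
behind the halting problem. The decider `halts` is purely
hypothetical -- the whole point is that no such total function can
exist -- and the names are mine, chosen only for illustration.]

def halts(program, data):
    """Hypothetical oracle: True iff program(data) terminates."""
    raise NotImplementedError("no such decider can exist")

def contrary(program):
    # Loop forever exactly when the oracle says this call would halt.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Feeding contrary to itself is the contradiction: if
# halts(contrary, contrary) returns True, contrary loops forever;
# if it returns False, contrary halts. Either answer is wrong.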
snip
Post by d***@hotmail.com
Post by David C. DiNucci
in which case they should wait for him to wake
up,
Not sure what you mean here in terms of what you feel the
processors 'should' and 'should not' do as dictated by the assumptions
of the proof. However, we are both writing very informally.
Right, maybe I should have said "in which case they may have wanted to
wait for him to wake up".
snip
Post by d***@hotmail.com
Well, in this case we would not be dealing with an asynchronous system,
'must-be-dead exception') as a type of pseudo failure detector.
By this definition, it sounds like the only true asynchronous systems
would be those with subsystems in radically different inertial frames,
or at least where message delivery time is completely unbounded. Such
systems are so rare that I'm not sure anybody but a theorist wastes
time trying to reason about them.
Post by d***@hotmail.com
[on a side note: you may want to check out the idea of 'logical clocks'
- a very interesting idea http://en.wikipedia.org/wiki/Logical_clocks]
I am very familiar with logical clocks, and that work could be used as
another example of taking existing confusing practice (in this case,
describing distributed events as a sequence), treating that practice
as immutable, and developing some reasoning to help iron out just a
little bit of the confusion. Logical clocks expose a little of the
causal partial ordering of events by assigning a logical time L to each
event such that if event a precedes event b in the partial order, then
L(a) < L(b).
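[For the record, here is a minimal sketch of Lamport's rule in Python --
the class and names are mine, for illustration only: increment on each
local event, and on receipt jump past the sender's timestamp, so that
causally ordered events get increasing logical times.]

class Process:
    def __init__(self):
        self.clock = 0  # logical time of the latest local event

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        self.clock += 1
        return self.clock  # timestamp piggybacked on the message

    def receive(self, msg_timestamp):
        # Advance past both our own history and the sender's.
        self.clock = max(self.clock, msg_timestamp) + 1
        return self.clock

p, q = Process(), Process()
t_send = p.send()           # L = 1 at p
t_recv = q.receive(t_send)  # L = 2 at q, so L(send) < L(receive)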
But it raises the question: Why are we using sequences to describe these
partially-ordered event traces in the first place? Why are we
expressing the systems that generate these event traces as sequential
processes (i.e. wound-up event sequences) with arcane event constraints
between them (e.g. in the form of locks or messages)? Why aren't we
expressing them in terms of folded-up event partial orderings, like a
Petri net, the same way that we describe sequential event traces as
wound-up sequences, i.e. as sequential algorithms/programs?
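[To make that concrete, here is a toy Petri-net sketch -- the places,
transitions, and marking are my own invented example. The net states
only the real dependencies, so two transitions that share no places are
simply unordered, with no interleaving imposed on them.]

# Places hold tokens; a transition consumes from its input places
# and produces to its output places.
marking = {"a": 1, "b": 1, "c": 0, "d": 0}
transitions = {
    "t1": (["a"], ["c"]),  # t1 and t2 share no places, so they are
    "t2": (["b"], ["d"]),  # causally unordered: a true partial order
}

def enabled(name):
    inputs, _ = transitions[name]
    return all(marking[p] >= 1 for p in inputs)

def fire(name):
    inputs, outputs = transitions[name]
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1

assert enabled("t1") and enabled("t2")  # both enabled at once
fire("t2"); fire("t1")                  # either order (or concurrently)
assert marking == {"a": 0, "b": 0, "c": 1, "d": 1}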
At some level, deep down, of course the answer is "because we don't
currently build computers that support Petri-net-like semantics in
hardware" (and may never). So, fine, there are a few people close to
the hardware who may need to deal with low-level issues to implement
those higher-level semantics and/or reliable communication, but the
reason they do so is so the rest of us won't have to care. Those people
should (and do) at least use the resources at their disposal...like a
clock on every processor that ticks off at about the same rate as the
clocks on the other processors, and hardware that can deliver messages
in some bounded time (as measured by those clocks) or be considered broken.
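[That discipline is easy to sketch. Assuming some worst-case delivery
bound D as measured by those local clocks -- the bound and API below
are illustrative assumptions, not any particular system's -- silence
beyond D can be treated as a broken peer rather than a slow one, giving
exactly the 'must-be-dead' style of pseudo failure detector mentioned
earlier.]

import socket

DELIVERY_BOUND_SECS = 2.0  # assumed worst-case delivery time D

def receive_or_declare_dead(sock):
    sock.settimeout(DELIVERY_BOUND_SECS)
    try:
        return sock.recv(4096)
    except socket.timeout:
        # No message within the bound: by assumption the sender or
        # the link is broken, not merely slow.
        raise RuntimeError("peer presumed dead: delivery bound exceeded")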
Post by d***@hotmail.com
But check out the paper "Unifying Synchronous and Asynchronous Message
Passing Models". The authors use combinatorial topology
And it uses combinatorial topology because it deals with global states
of distributed systems. At that point, it becomes completely useless to
me, because I have no interest in reasoning about, or even postulating
the existence of, the global state of a distributed system.
Post by d***@hotmail.com
to unify
asynchronous, semi-synchronous and synchronous message passing models
So, someone first makes an ad hoc categorization of message passing
models, and then someone unifies them. Though this is not the best
example, it does fit with the theme of some theorists solving problems
that they themselves have devised rather than problems that the real
world is having.
I am not going to claim that this work, or some of the other work you've
cited, is completely useless for everyone, but it does look like it
would be a waste of my time to explore it much further, in part because
I see no need to ever deal with either asynchronous or synchronous
message passing models (by the definitions you are using), especially
ones where the death of a process cannot be detected, and in part
because I have no desire or need to deal with this somewhat illusory
concept of a global distributed state.
big snip
Post by d***@hotmail.com
But back to FLP for a sec... Part of the importance of the FLP result
was that it led to a complete characterization of the decision tasks
that are solvable in an asynchronous system. This characterization is
constructive, meaning a positive result (i.e., possibility) not only
proves the existence of a protocol to solve the task, but immediately
allows one to essentially 'plug the task into' a generic protocol that
actually solves it.
Good. Now give me a real-world example of an asynchronous system, or
better, an asynchronous system that cannot be fairly easily converted to
a semi-synchronous system.
big snip
Post by d***@hotmail.com
Finally, it was the FLP paper that first explicitly used 'state
indistinguishability' in a proof - a key idea in more than just DC.
In that case, this paper marks a wrong turn that some researchers have
apparently still not recovered from. Its description of a
configuration being a global system state, and of a step as going from
one configuration to another, is an approach borrowed from old
sequential models like the TM, and it is not a surprise that applying it
to fields like DC yields the sorts of complexity that set researchers
down these paths from which they'll seemingly never return.
snip
Post by d***@hotmail.com
The impossibility of faster than light travel is a cornerstone of the
theory of relativity, for example. I understand the theory vs. practice
thing, but for something as complex and as inherently mathematical as
distributed computing, it seems that at some point we will have no
choice but to employ more esoteric math to aid us. DC is just another
example of a 'complex system' in my book.
I only wish that the majority of the distributed systems camp would have
taken issues like relativity a little more seriously. Why try to reason
about the global system state when there's no entity that experiences
that state? When each element of the system can only perceive its
environment via signals (messages, memory) that take time to traverse
space? And for synchronous models, why try to pretend that these
signals don't take such time? Answer: Because it feels easier to get
one's one-track mind around this one big thing than lots of little
things. It's an illusion.
Reasoning tools, like partial orders and atomic transactions and Petri
nets, have long been available to get us away from global system states
and combinatorial state explosion, and to help achieve non-interference
and modularity and causal reasoning, even using our one-track minds.
These tools, like those you've cited, also came from theoretical
insight, but it was insight about real-world problems which motivated
real-world solutions, instead of about manufactured problems which
motivated only more theory.
Over a decade ago in my own dissertation, I drew upon many of those
tools to devise a computational model (F-Nets) to address many of the
shortcomings of reasoning about global distributed state, and I have
since been building the necessary software engineering structure and
tools around it (Software Cabling and ScalPL). F-Nets' semantics can be
expressed as 5 fairly common-sense axioms on their executions (which are
expressed as partial orderings). Reading your post, I find it
extremely frustrating to consider that today's theoreticians might be
almost completely ignoring such productive directions to spend their
time instead on complex, unrealistic, and wrong-headed pursuits.
[If it sounds like I'm getting too worked up about this and completely
dismissing your arguments, don't be too sure that I'm not playing
devil's advocate.]
-- Dave
http://www.elepar.com/
http://www.elepar.com/Passworded/references.html