Discussion:
Synchronous calls over asynchronous protocol
Dan Ellis
2007-10-13 14:20:24 UTC
Permalink
Some software I'm writing uses an asynchronous protocol in which
messages are exchanged between two peers whenever required. Some of
these messages are asynchronous events, which match the protocol
nicely, and some are message/response pairs that I would like to be
handled synchronously by the API. When the API sends a synchronous
message, other async messages may be received before the reply.

Is there a known pattern for dealing with this in the API? My idea is
to have message sending and receiving done in one thread, and
execution flow in the other. That way the first thread can queue
messages received while waiting for a synchronous response and deliver
them to the second thread after giving it the synchronous response. Is
there a better way?

Regards,
Dan
Ico
2007-10-13 14:48:58 UTC
Permalink
Post by Dan Ellis
Some software I'm writing uses an asynchronous protocol in which
messages are exchanged between two peers whenever required. Some of
these messages are asynchronous events, which match the protocol
nicely, and some are message/response pairs that I would like to be
handled synchronously by the API. When the API sends a synchronous
message, other async messages may be received before the reply.
Is there a known pattern for dealing with this in the API? My idea is
to have message sending and receiving done in one thread, and
execution flow in the other. That way the first thread can queue
messages received while waiting for a synchronous response and deliver
them to the second thread after giving it the synchronous response. Is
there a better way?
Which method is 'better' depends on a lot of things. What does the
receiver need to do when an event arrives? You could write a generic
receive function which you use for synchronous transfers as well. When
this function receives an answer to an earlier request, your synchronous
call is done. If the function receives an event instead, you could simply
store it in a queue which you poll after the synchronous call has
finished, or you might even be able to act on the event right away while
still inside the synchronous send/recv function.
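To make that concrete, here is a minimal Python sketch of such a generic receive function. The `Link` class, the message shape, and the request-id matching are all hypothetical illustration, not part of any real protocol:

```python
import queue

class Link:
    """Toy message link; a hypothetical stand-in for the real peer connection."""
    def __init__(self):
        self._incoming = queue.Queue()

    def feed(self, msg):
        # What the peer (or driver code) would do when a message arrives.
        self._incoming.put(msg)

    def recv(self):
        # Blocking read of the next raw message.
        return self._incoming.get()

class Client:
    def __init__(self, link):
        self.link = link
        self.pending_events = queue.Queue()   # events queued during a sync call

    def send_sync(self, request_id):
        # (Actually transmitting the request is omitted in this sketch.)
        while True:
            msg = self.link.recv()
            if msg.get("reply_to") == request_id:
                return msg                    # the synchronous call is done
            self.pending_events.put(msg)      # stash events to poll for later

link = Link()
client = Client(link)
link.feed({"type": "event", "name": "tick"})
link.feed({"type": "reply", "reply_to": 42, "value": "ok"})
reply = client.send_sync(42)
ev = client.pending_events.get()
print(reply["value"], ev["name"])  # -> ok tick
```

The same `recv` loop serves both purposes: during a synchronous exchange it filters for the reply, and at any other time its events can be dispatched immediately.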
--
:wq
^X^Cy^K^X^C^C^C^C
H. S. Lahman
2007-10-13 15:33:17 UTC
Permalink
Responding to Ellis...

You didn't mention what the environment is. This is important because
the OO paradigm has built-in restrictions on which communications can be
synchronous and which must be asynchronous.
Post by Dan Ellis
Some software I'm writing uses an asynchronous protocol in which
messages are exchanged between two peers whenever required. Some of
these messages are asynchronous events, which match the protocol
nicely, and some are message/response pairs that I would like to be
handled synchronously by the API. When the API sends a synchronous
message, other async messages may be received before the reply.
If I understand this correctly, you want a subsystem interface to
provide a single synchronous call to the client that does not return
until the subsystem processing is done but the subsystem processing is
internally asynchronous (i.e., has an internal event queue).
Post by Dan Ellis
Is there a known pattern for dealing with this in the API? My idea is
to have message sending and receiving done in one thread, and
execution flow in the other. That way the first thread can queue
messages received while waiting for a synchronous response and deliver
them to the second thread after giving it the synchronous response. Is
there a better way?
Alas, I don't think so. Essentially you are a victim of the
infrastructures provided by the OS and language to support asynchronous
processing when the hardware computational models are inherently
synchronous. At the 3GL level those infrastructures don't respond well
to converting asynchronous to synchronous in this situation. Basically
you have:

Interface::doIt()
push event on subsystem queue
wait for completion
extract and return a result

The tricky part is waiting for completion. If the OS does not provide a
convenient 'wait' service, you will have to poll something to see when
the processing is done. If you poll, you will probably need a
prioritized thread to keep the polling loop from hogging the CPU.
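Conversely, when the platform does provide a usable 'wait' service, the polling loop disappears entirely. A minimal Python sketch, using `threading.Event` as the wait primitive and a toy worker standing in for the subsystem's internal processing (all names invented for illustration):

```python
import threading

class Interface:
    """Sketch of Interface::doIt() using a wait primitive instead of polling."""
    def __init__(self):
        self._done = threading.Event()
        self._result = None

    def _subsystem_worker(self, event):
        # Stands in for the subsystem draining its internal event queue.
        self._result = event.upper()      # arbitrary toy "processing"
        self._done.set()                  # signal completion; no polling needed

    def do_it(self, event):
        self._done.clear()
        # push event on subsystem queue
        threading.Thread(target=self._subsystem_worker, args=(event,)).start()
        self._done.wait()                 # wait for completion (blocks, no CPU spin)
        return self._result               # extract and return a result

iface = Interface()
result = iface.do_it("done")
print(result)  # -> DONE
```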

So doing this will tend to be a pig. That's why the OO paradigm only
allows knowledge to be accessed synchronously in the OOA/D while
behavior is accessed asynchronously. Since knowledge is static it can
always be accessed by a single synchronous service in the subsystem
interface, regardless of how many objects contain the data set. So the
API protocol would be broken up to look like:

Client Service
| |
| trigger behavior to modify data |
|----------------------------------->|
| |
| I'm done with behavior |
|<-----------------------------------|
| |
|access results (synchronous service)|
|----------------------------------->|
| |

IOW, in the OO paradigm behavior responsibilities and knowledge
responsibilities are always independent so they are encapsulated and
accessed separately. You can then employ a state machine on the client
side to enforce the sequencing constraint between the behavior
invocation and data access.
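Such a client-side state machine can be quite small. A hedged Python sketch, with phase names and transitions invented purely for illustration:

```python
from enum import Enum, auto

class Phase(Enum):
    IDLE = auto()
    AWAITING_DONE = auto()
    RESULT_READY = auto()

class ClientFSM:
    """Enforces: trigger behavior -> 'done' announcement -> access results."""
    def __init__(self):
        self.phase = Phase.IDLE

    def trigger(self):
        if self.phase is not Phase.IDLE:
            raise RuntimeError("behavior already in flight")
        self.phase = Phase.AWAITING_DONE
        # ... send the 'trigger behavior to modify data' message here ...

    def on_done(self):
        if self.phase is not Phase.AWAITING_DONE:
            raise RuntimeError("unexpected 'done' announcement")
        self.phase = Phase.RESULT_READY

    def access_results(self):
        if self.phase is not Phase.RESULT_READY:
            raise RuntimeError("sequencing violated: results not ready")
        self.phase = Phase.IDLE
        # ... synchronous knowledge access here ...
        return "results"

fsm = ClientFSM()
fsm.trigger()
fsm.on_done()
r = fsm.access_results()
print(r)  # -> results
```

Any out-of-order call (e.g. `access_results` before `on_done`) raises immediately, which is exactly the sequencing constraint the state machine exists to enforce.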

<aside>
People who are used only to OOPL programming often don't understand why
the OOA/D is so picky about separating knowledge from behavior and
restricting synchronous/asynchronous access. This is a good example of
why OOA/D is so picky. If the behavior and its results are accessed
together, one is easily painted into the polling/threading corner at OOP
time. And it gets much worse for interoperability infrastructures in
distributed environments that need to manage remote object access.
</aside>


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
***@pathfindermda.com
Pathfinder Solutions
http://www.pathfindermda.com
blog: http://pathfinderpeople.blogs.com/hslahman
"Model-Based Translation: The Next Step in Agile Development". Email
***@pathfindermda.com for your copy.
Pathfinder is hiring:
http://www.pathfindermda.com/about_us/careers_pos3.php.
(888)OOA-PATH
Diego
2007-10-16 14:08:09 UTC
Permalink
Post by H. S. Lahman
Interface::doIt()
push event on subsystem queue
wait for completion
extract and return a result
The tricky part is waiting for completion. If the OS does not provide a
convenient 'wait' service, you will have to poll something to see when
the processing is done. If you poll, you will probably need a
prioritized thread to keep the polling loop from hogging the CPU.
So doing this will tend to be a pig. That's why the OO paradigm only
allows knowledge to be accessed synchronously in the OOA/D while
behavior is accessed asynchronously.
Perhaps I am being totally clueless, but what is the relation between
polling and a programming paradigm?
Rephrasing: is there a programming paradigm that solves the polling
problem? In the example above, we always have to poll in procedural,
functional, OO...

Diego
H. S. Lahman
2007-10-16 15:05:03 UTC
Permalink
Responding to Diego...
Post by Diego
Post by H. S. Lahman
Interface::doIt()
push event on subsystem queue
wait for completion
extract and return a result
The tricky part is waiting for completion. If the OS does not provide a
convenient 'wait' service, you will have to poll something to see when
the processing is done. If you poll, you will probably need a
prioritized thread to keep the polling loop from hogging the CPU.
So doing this will tend to be a pig. That's why the OO paradigm only
allows knowledge to be accessed synchronously in the OOA/D while
behavior is accessed asynchronously.
Perhaps I am being totally clueless, but what is the relation between
polling and a programming paradigm?
Rephrasing: is there a programming paradigm that solves the polling
problem? In the example above, we always have to poll in procedural,
functional, OO...
I agree, you always need to poll if you are talking directly to
hardware; there is no other way to recognize that an interrupt bit in a
hardware register has <asynchronously> changed unless one looks at it
periodically.

However, the OP was describing a different problem where the software
itself was behaving asynchronously (i.e., behavior communication was
internally event-based within the subsystem) and the OP wanted to
provide a synchronous interface to that software processing. One can
provide a synchronous interface without polling if one changes the
interface:

Interface::doIt()
push event on subsystem queue // access behavior responsibility

Interface::getResult()
extract and return result // access knowledge responsibility

Then the subsystem can tell the client when the result is ready by
proactively sending an announcement message when its processing is done
rather than having the client poll it to see if it is ready.

My point was that in the OO paradigm that comes for free because
behavior (whatever is done in response to the event) is always separated
from knowledge (the results in state variables) in terms of
responsibilities. Since OO responsibilities are self-contained,
independent, and logically indivisible one accesses each responsibility
separately and the subsystem interface and the client/service protocol
would reflect that.

Conversely, if knowledge and behavior are not separated in the
interface, then one is painted into the corner of needing polling with
the attendant overhead.
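As a rough illustration of that split, here is a minimal Python sketch in which `do_it` only triggers behavior and proactively announces completion through a caller-supplied callback, while `get_result` only reads knowledge. All names and the worker-thread mechanics are hypothetical:

```python
import threading

class Subsystem:
    """Split interface: doIt() only triggers behavior; getResult() only
    reads knowledge. Completion is announced via a callback, not polled."""
    def __init__(self):
        self._result = None

    def do_it(self, event, on_done):
        def work():
            # Behavior responsibility: whatever is done in response to the event.
            self._result = "processed:" + event
            on_done()                     # proactive announcement message
        threading.Thread(target=work).start()

    def get_result(self):
        # Knowledge responsibility: read the results in state variables.
        return self._result

done = threading.Event()
sub = Subsystem()
sub.do_it("evt1", done.set)   # the client decides how to react to the announcement
done.wait()
print(sub.get_result())       # -> processed:evt1
```

Here the client happens to block on the announcement, but it could equally treat it as one more asynchronous event; the interface no longer forces either choice.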



Robert Maas, see http://tinyurl.com/uh3t
2007-10-14 02:50:33 UTC
Permalink
Before I respond point by point, I read later in the thread and see
another person said any attempt to emulate a mix of synchronous and
asynchronous messages within an underlying synchronous API is
doomed to be a CPU hog. I want to fully study your problem before I
agree or disagree with that opinion.
Post by Dan Ellis
Some software I'm writing uses an asynchronous protocol in which
messages are exchanged between two peers whenever required. Some of
these messages are asynchronous events, which match the protocol
nicely, and some are message/response pairs that I would like to be
handled synchronously by the API. When the API sends a synchronous
message, other async messages may be received before the reply.
First crucial question: What is the operational model of the given
asynchronous API? My guess is that there's a registry of listeners
for various types of messages, similar to Java's event listeners in
swing or AWT. Whenever a message comes in which matches one or more
listeners, those listeners are activated either in sequence by
priority or in parallel. Whenever a message comes in that nobody is
listening for, either it is ignored, or an error is signalled. Is
that essentially correct? (It doesn't matter *where* the listener
registry is located, down in the system, or in the application.)

Second crucial question: What operational model do you wish to
implement? When there's no synchronous exchange in progress (query
already sent, response not received), I assume you want exactly the
underlying asynchronous behaviour. But when a synchronous exchange
is started, what is the new *mode* of the overall application?
- All threads (except GUI and message-receiver-distributer) are
locked. If any message except the awaited one arrives, it is
simply put in a queue, no action whatsoever occurs (except
classifying it as not-wanted-now). Because all threads are locked,
it's impossible for another synchronous exchange to be started.
(It's a bug if something in the GUI or message-receiver-distributer
thread somehow invokes another synchronous exchange start.
Ideally there should be a compile-time protection against this
ever happening.) Is this a client/server protocol, where *only*
this client computer, not the other server one, is allowed to
initiate synchronous requests? Or is this a bidirectional RMI
protocol where either of the two machines is allowed to initiate
a synchronous request to the other?
- Threads are kept running just like normal, except the one thread
that is waiting for the synchronous response. But still no other
thread is supposed to start another synchronous interchange at
this time. This is enforced either by only that one thread ever
having access to the synchronous emulation mechanism, or by a
runtime error signalled if another process tries to do it during
this time. Queuing of incoming asynchronous messages occurs as
above. Are active threads allowed, or not allowed, to generate
outgoing asynchronous messages during this time? And the same
question as above: is the other computer allowed to initiate
synchronous requests?
- More than one synchronous interchange can be in progress at the
same time, but they must be strictly nested. Each process waiting
for a response is locked until it gets its response. For example:
Thread1: run....[REQUEST...stoppppppppppppppppppped...RESPONSE]...run.
Thread2: run.............[REQUEST...stopped...RESPONSE]...run.........
- With bidirectional synchronous interchanges, this nesting is also possible:
Thread1: run..[REQUEST...stoppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppped...RESPONSE]...run...
ReqListener: [RequestFromOtherHost...run..[REQstopRESP]..runReplyToOtherHost]
On other host, it looks like this:
ReqListener1: [RequestFromOtherHost...run..[REQstoppppppppppppppppppppppppppppppppppppppppppppppppppedRESP]...ReplyToOtherHost]
ReqListener2: [Rfoh...Rtoh]
Post by Dan Ellis
Is there a known pattern for dealing with this in the API?
I'm not an expert on officially-known design patterns, but I
vaguely believe that maintaining locks to synchronize processes
might be one. Whether it is, my idea, which might work with any of
the variants I described above, is given later below.
Post by Dan Ellis
My idea is to have message sending and receiving done in one
thread, and execution flow in the other.
Message sending/receiving involves both your application and the
system, hence already involves more than one thread, only some of
which you as a programmer can directly "orchestrate" (program).
Furthermore, your main application probably already has more than one
thread. For example, if it has a GUI, the GUI runs immediately in the
foreground whenever a user event is processed, while the main thread
of your application is shut out during the processing of that GUI
event; the GUI itself has a timeshared thread that polls for hardware
interrupts at frequent intervals (or the system polls for hardware
interrupts and generates system interrupts, while the GUI event
listener polls for any pending system interrupts).

If my general model above is correct, there's a process either in the
system or in your JVM or deep in your application above the API
that takes a single stream of incoming messages and distributes
them to all appropriate message listeners. Thus your various
message listeners may invoke a system call to register listeners
with user callbacks for each, so the system invokes the callbacks
later, or your various message listeners may call your own code to
register them in your own listener registry, so your own code
matches message types against the registry to invoke the
appropriate callbacks.
Post by Dan Ellis
That way the first thread can queue messages received while
waiting for a synchronous response and deliver them to the second
thread after giving it the synchronous response.
What I have in mind is that the deep-down code handling the listener
registry
(I think this requires that your own code handle the listener
registry; see later for an idea to use if the system normally
handles the registry)
will queue any message that is valid
(has at least one listener)
but is not appropriate to invoke the callback right now
(because all such incoming messages are locked until the
synchronous call completes)
and then when the synchronous reply finally arrives this
deep-down code will first invoke the synchronous callback and then
map down the queue of pending asynchronous messages invoking each
set of listener callbacks.
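That queue-then-drain behaviour can be sketched in a few lines of Python. The registry layout and message shapes here are invented for illustration:

```python
import queue

class Registry:
    """Listener registry that defers event callbacks while a synchronous
    exchange is in flight, then drains them after the reply is delivered."""
    def __init__(self):
        self.listeners = {}            # message type -> list of callbacks
        self.pending = queue.Queue()
        self.sync_in_flight = False

    def register(self, mtype, callback):
        self.listeners.setdefault(mtype, []).append(callback)

    def deliver(self, msg):
        if self.sync_in_flight and msg["type"] != "sync-reply":
            self.pending.put(msg)      # valid, but locked out until the reply
            return
        for cb in self.listeners.get(msg["type"], []):
            cb(msg)

    def on_sync_reply(self, msg):
        self.deliver(msg)              # 1. invoke the synchronous callback first
        self.sync_in_flight = False
        while not self.pending.empty():
            self.deliver(self.pending.get())   # 2. map down the pending queue

log = []
reg = Registry()
reg.register("event", lambda m: log.append(("event", m["n"])))
reg.register("sync-reply", lambda m: log.append(("reply", m["n"])))
reg.sync_in_flight = True
reg.deliver({"type": "event", "n": 1})          # queued, not dispatched yet
reg.on_sync_reply({"type": "sync-reply", "n": 0})
print(log)  # -> [('reply', 0), ('event', 1)]
```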

So here's my basic idea: The thread that requests the synchronous
interchange has a declaration like this (all syntax Javaesque):
- private static Message staticReplyMessageBuffer;
- private static Thread myThread;
and where it actually requests the synchronous interchange it does this:
- private Message outgoingMessage;
- private Message incomingMessage;
- code to set up the outgoingMessage ...
- myThread = self; /* So that myCallback knows which thread to wake up */
- MyMessageRegistry.sendSynchronous(outgoingMessage, myCallback());
- myThread.setRunnable(false);
- incomingMessage = staticReplyMessageBuffer; /* won't
execute immediately because self is no longer runnable. */
- code to process the incomingMessage ...
myCallback(Message msg) is defined as:
- staticReplyMessageBuffer = msg;
- myThread.setRunnable(true); /* The suspended code above will finally run */

Now the code as given there has a race condition. Suppose that just
after the interchange-request code calls sendSynchronous, but
before it can call setRunnable(false), it gets suspended. While
it's suspended the complete message transaction occurs, including
execution of setRunnable(true). Now the original thread finally
gets resumed, and it calls setRunnable(false), and hangs forever.
Somehow you need to protect against that happening. The critical
section of code where sendSynchronous and setRunnable(false) are
called must be protected from interleaving with the single line of
code setRunnable(true), by some kind of *lock*. How you do this
depends on what programming language you're using. If you're
writing in C talking directly to Unix system utilities, you probably
will use a lockfile, with calls to system lockfile utilities to
avoid being a resource hog polling for the lockfile. If you're
programming in Java, you probably will use Class-synchronized
methods, which are "guaranteed" by the JVM to avoid CPU hogging.
In other languages, I have no idea, and in any case I'll leave the
details of locking the critical sections from each other to your
expertise in whatever language you're programming in.
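For example, in a language with monitors the two critical sections can share a single lock, so the wakeup cannot be lost no matter how the threads interleave. A minimal Python sketch of the same send-then-wait idea, race-free, with the transport and names invented for illustration:

```python
import threading

class SyncExchange:
    """Race-free variant of the sketch above: the requesting thread holds one
    lock across 'send' and 'wait', and the callback notifies under the same
    lock, so the wakeup cannot be lost even if the reply arrives at once."""
    def __init__(self):
        self._cond = threading.Condition()
        self._reply = None

    def send_synchronous(self, outgoing, transport):
        with self._cond:                   # critical section starts before send
            transport(outgoing, self._callback)
            while self._reply is None:     # loop guards against spurious wakeups
                self._cond.wait()          # atomically releases and reacquires lock
            reply, self._reply = self._reply, None
            return reply

    def _callback(self, msg):
        with self._cond:                   # same lock: cannot interleave with the
            self._reply = msg              # send-and-wait setup above
            self._cond.notify()

# Toy transport that delivers the reply from another thread, possibly "immediately".
def transport(msg, callback):
    threading.Thread(target=callback, args=(msg + "-reply",)).start()

ex = SyncExchange()
print(ex.send_synchronous("req", transport))  # -> req-reply
```

Even if the reply thread runs before the requester reaches `wait()`, the callback blocks on the lock until `wait()` releases it, so no notification is dropped.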

So I ask the expert who said this would be a CPU hog no matter how
it is done to check what I suggested above and tell me if I found a
non-hog way to do it or not. I'd appreciate being enlightened if
I've overlooked some fatal flaw in this idea.

And to the OP, if my basic idea *is* correct (CPU non-hog, and race
condition resolvable by my suggested locking) does your programming
language support any suitable locking mechanism as would be needed?