Before I respond point by point: reading later in the thread, I see
that another person said that any attempt to emulate a mix of
synchronous and asynchronous messages on top of an underlying
asynchronous API is doomed to be a CPU hog. I want to study your
problem fully before I agree or disagree with that opinion.
Post by Dan Ellis
Some software I'm writing uses an asynchronous protocol in which
messages are exchanged between two peers whenever required. Some of
these messages are asynchronous events, which match the protocol
nicely, and some are message/response pairs that I would like to be
handled synchronously by the API. When the API sends a synchronous
message, other async messages may be received before the reply.
First crucial question: What is the operational model of the given
asynchronous API? My guess is that there's a registry of listeners
for various types of messages, similar to Java's event listeners in
swing or AWT. Whenever a message comes in which matches one or more
listeners, those listeners are activated either in sequence by
priority or in parallel. Whenever a message comes in that nobody is
listening for, either it is ignored, or an error is signalled. Is
that essentially correct? (It doesn't matter *where* the listener
registry is located, down in the system, or in the application.)
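To make that guessed model concrete, here is a minimal Java sketch of
such a listener registry, assuming string-typed messages; all class and
method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch of the guessed model: a registry mapping message
// types to listeners. A message matching one or more listeners activates
// them in sequence; a message nobody listens for is ignored (dispatch
// returns false), though signalling an error would also fit the model.
class ListenerRegistry {
    private final Map<String, List<Consumer<String>>> listeners = new HashMap<>();

    void register(String messageType, Consumer<String> listener) {
        listeners.computeIfAbsent(messageType, k -> new ArrayList<>())
                 .add(listener);
    }

    // Returns true if at least one listener handled the message.
    boolean dispatch(String messageType, String payload) {
        List<Consumer<String>> matched = listeners.get(messageType);
        if (matched == null || matched.isEmpty()) {
            return false;                      // ignored (or signal an error)
        }
        for (Consumer<String> l : matched) {
            l.accept(payload);                 // activated in sequence
        }
        return true;
    }
}
```

It doesn't matter for this sketch whether the registry lives in the
system or in the application; only the dispatch behaviour matters.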
Second crucial question: What operational model do you wish to
implement? When there's no synchronous exchange in progress (query
already sent, response not received), I assume you want exactly the
underlying asynchronous behaviour. But when a synchronous exchange
is started, what is the new *mode* of the overall application?
- All threads (except GUI and message-receiver-distributor) are
locked. If any message except the awaited one arrives, it is
simply put in a queue, no action whatsoever occurs (except
classifying it as not-wanted-now). Because all threads are locked,
it's impossible for another synchronous exchange to be started.
  (It's a bug if something in the GUI or message-receiver-distributor
thread somehow invokes another synchronous exchange start.
Ideally there should be a compile-time protection against this
  ever happening.) Is this a client/server protocol, where *only*
  this client computer, and not the server, is allowed to initiate
  synchronous requests? Or is this a bidirectional RMI protocol
  where either of the two machines is allowed to initiate a
  synchronous request to the other?
- Threads are kept running just like normal, except the one thread
that is waiting for the synchronous response. But still no other
thread is supposed to start another synchronous interchange at
this time. This is enforced either by only that one thread ever
  having access to the synchronous emulation mechanism, or by a
runtime error signalled if another process tries to do it during
this time. Queuing of incoming asynchronous messages occurs as
  above. Are active threads allowed, or not allowed, to generate
  outgoing asynchronous messages during this time? And the same
  question as above: is the other computer allowed to initiate
  synchronous requests?
- More than one synchronous interchange can be in progress at the
same time, but they must be strictly nested. Each process waiting
for a response is locked until it gets its response. For example:
Thread1: run....[REQUEST...stoppppppppppppppppppped...RESPONSE]...run.
Thread2: run.............[REQUEST...stopped...RESPONSE]...run.........
- With bidirectional synchronous interchanges, this nesting is also possible:
Thread1: run..[REQUEST...stoppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppped...RESPONSE]...run...
ReqListener: [RequestFromOtherHost...run..[REQstopRESP]..runReplyToOtherHost]
On other host, it looks like this:
ReqListener1: [RequestFromOtherHost...run..[REQstoppppppppppppppppppppppppppppppppppppppppppppppppppedRESP]...ReplyToOtherHost]
ReqListener2: [Rfoh...Rtoh]
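For whichever variant applies, the rule that no second synchronous
interchange may start while one is in progress could be enforced with a
small runtime guard. A minimal Java sketch, assuming a runtime error is
the chosen enforcement (names hypothetical):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical guard: at most one synchronous exchange at a time.
// begin() atomically claims the single "exchange in progress" slot;
// a second begin() before end() signals a runtime error, which is one
// of the enforcement options described above (the other being to give
// only one thread access to the synchronous emulation mechanism).
class SyncExchangeGuard {
    private final AtomicBoolean inProgress = new AtomicBoolean(false);

    void begin() {
        if (!inProgress.compareAndSet(false, true)) {
            throw new IllegalStateException(
                "synchronous exchange already in progress");
        }
    }

    void end() {
        inProgress.set(false);
    }
}
```

The strictly-nested variant would instead need a per-thread counter or
stack, but the same claim-and-release shape applies.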
Post by Dan Ellis
Is there a known pattern for dealing with this in the API?
I'm not an expert on officially-known design patterns, but I
vaguely believe that maintaining locks to synchronize processes
might be one. Whether or not it is, my idea, which might work with
any of the variants I described above, is given below.
Post by Dan Ellis
My idea is to have message sending and receiving done in one
thread, and execution flow in the other.
Message sending/receiving involves both your application and the
system, hence already involves more than one thread, only some of
which you as a programmer can directly "orchestrate" (program).
Furthermore your main application probably already has more than
one thread. For example, it may have a GUI that runs immediately
in the foreground whenever a user event is processed, while the
main process of your application is shut out during the processing
of that GUI event; and the GUI itself has a timeshared thread that
polls for hardware interrupts at frequent intervals (or the system
polls for hardware interrupts and generates system interrupts,
while the GUI event listener polls for any pending system
interrupts).
If my general model above is correct, there's a process either in the
system or in your JVM or deep in your application above the API
that takes a single stream of incoming messages and distributes
them to all appropriate message listeners. Thus your various
message listeners may invoke a system call to register listeners
with user callbacks for each, so the system invokes the callbacks
later, or your various message listeners may call your own code to
register them in your own listener registry, so your own code
matches message types against the registry to invoke the
appropriate callbacks.
Post by Dan Ellis
That way the first thread can queue messages received while
waiting for a synchronous response and deliver them to the second
thread after giving it the synchronous response.
I have more in mind: that the deep-down code handling the listener
registry
(I think this requires that your own code handle the listener
registry, see later an idea to use if the system normally
handles the registry)
will queue any message that is valid
(has at least one listener)
but is not appropriate to invoke the callback right now
(because all such incoming messages are locked until the
synchronous call completes)
and then when the synchronous reply finally arrives this
deep-down code will first invoke the synchronous callback and then
map down the queue of pending asynchronous messages invoking each
set of listener callbacks.
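That queue-then-drain behaviour might be sketched like this in Java,
assuming your own code owns the listener registry; all names are
hypothetical, and real message and listener types would replace the
strings:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

// Hypothetical sketch: while a synchronous reply is awaited, every other
// valid incoming message is queued, not dispatched. When the reply
// arrives, the synchronous callback runs first and then the queue of
// pending asynchronous messages is worked down in arrival order.
class QueuingDispatcher {
    private final Deque<String> pending = new ArrayDeque<>();
    private boolean awaitingReply = false;
    private Consumer<String> syncCallback;
    private final Consumer<String> asyncCallback;

    QueuingDispatcher(Consumer<String> asyncCallback) {
        this.asyncCallback = asyncCallback;
    }

    // Called just before the synchronous request is sent.
    synchronized void beginSync(Consumer<String> callback) {
        awaitingReply = true;
        syncCallback = callback;
    }

    // Called by the message-receiver thread for every incoming message.
    synchronized void onMessage(String msg, boolean isSyncReply) {
        if (awaitingReply && !isSyncReply) {
            pending.add(msg);                  // not wanted now: queue it
        } else if (awaitingReply) {
            awaitingReply = false;
            syncCallback.accept(msg);          // synchronous callback first
            while (!pending.isEmpty()) {       // then drain the queue
                asyncCallback.accept(pending.poll());
            }
        } else {
            asyncCallback.accept(msg);         // normal asynchronous path
        }
    }
}
```

In real code `isSyncReply` would be decided by matching the message
against the outstanding request rather than passed in by the caller.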
So here's my basic idea: The thread that requests the synchronous
interchange has a declaration like this (all syntax Javaesque):
- private static Message staticReplyMessageBuffer;
- private static Thread myThread;
and where it actually requests the synchronous interchange it does this:
- private Message outgoingMessage;
- private Message incomingMessage;
- code to set up the outgoingMessage ...
- myThread = self; /* So that myCallback knows which thread to wake up */
- MyMessageRegistry.sendSynchronous(outgoingMessage, myCallback());
- myThread.setRunnable(false);
- incomingMessage = staticReplyMessageBuffer; /* won't
execute immediately because self is no longer runnable. */
- code to process the incomingMessage ...
myCallback(Message msg) is defined as:
- staticReplyMessageBuffer = msg;
- myThread.setRunnable(true); /* The suspended code above will finally run */
Now the code as given there has a race condition. Suppose that just
after the interchange-request code calls sendSynchronous, but
before it can call setRunnable(false), it gets suspended. While
it's suspended the complete message transaction occurs, including
execution of setRunnable(true). Now the original thread finally
gets resumed, and it calls setRunnable(false), and hangs forever.
Somehow you need to protect against that happening. The critical
section of code where sendSynchronous and setRunnable(false) are
called must be protected from interleaving with the single line of
code setRunnable(true), by some kind of *lock*. How you do this
depends on what programming language you're using. If you're
writing in C talking directly to Unix system utilities, you probably
will use a lockfile, with calls to system lockfile utilities to
avoid being a resource hog polling for the lockfile. If you're
programming in Java, you probably will use Class-synchronized
methods, which are "guaranteed" by the JVM to avoid CPU hogging.
In other languages, I have no idea, and in any case I'll leave the
details of locking the critical sections from each other to your
expertise in whatever language you're programming in.
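For the Java case, here is one hedged sketch of how that locking could
look: a single monitor guards both the send-and-suspend sequence and
the wake-up, so the callback cannot interleave between sendSynchronous
and the suspension, which removes the race described above. wait()
blocks without polling, so this is not a CPU hog. All names are
hypothetical and stand in for the setRunnable-based pseudocode:

```java
// Hypothetical sketch: wait/notify in place of the setRunnable flag.
class SyncEmulator {
    private final Object lock = new Object();
    private Object replyBuffer;
    private boolean replyArrived;

    // Called by the requesting thread. 'send' stands in for whatever
    // actually transmits outgoing; the lock is held across the send
    // and released only inside wait(), closing the race window.
    Object sendSynchronous(Object outgoing, Runnable send)
            throws InterruptedException {
        synchronized (lock) {
            replyArrived = false;
            send.run();                  // transmit the outgoing message
            while (!replyArrived) {      // guards against spurious wakeups
                lock.wait();             // releases the lock while blocked
            }
            return replyBuffer;
        }
    }

    // Called from the message-receiver thread when the reply arrives;
    // it cannot run between send.run() and lock.wait() above, because
    // the requester still holds the lock there.
    void myCallback(Object msg) {
        synchronized (lock) {
            replyBuffer = msg;
            replyArrived = true;
            lock.notifyAll();            // wake the suspended requester
        }
    }
}
```

The while-loop re-check also makes the code safe if the reply is
processed before the requester ever reaches wait().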
So I ask the expert who said this would be a CPU hog no matter how
it is done to check what I suggested above and tell me if I found a
non-hog way to do it or not. I'd appreciate being enlightened if
I've overlooked some fatal flaw in this idea.
And to the OP, if my basic idea *is* correct (CPU non-hog, and race
condition resolvable by my suggested locking) does your programming
language support any suitable locking mechanism as would be needed?