
Conference pamsrc::decmessageq

Title:NAS Message Queuing Bus
Notice:KITS/DOC, see 4.*; Entering QARs, see 9.1; Register in 10
Moderator:PAMSRC::MARCUSEN
Created:Wed Feb 27 1991
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2898
Total number of notes:12363

2845.0. "DMQ TCP/IP driver hangs running customer benchmark" by ROM01::OLD_BOCCANER () Wed Apr 09 1997 14:43

Hi,
I'm working on a project at a large telecom (TLC) account in Rome.
The project's goal is to migrate all of the customer's applications in the
GSM mobile area to the DMQ interface.

One project activity was to build a benchmark to test DMQ performance.
We wrote a Collector module and an Agent module, both running on the OpenVMS
and DEC UNIX platforms; TCP/IP was selected as the only comm. protocol.

The benchmark mechanism is simple: the Collector sends messages to the Agent,
which sends each message straight back, acting as a "mirror". The Collector
records the elapsed time of the whole sequence, then writes the information
to a file.

Operational parameters are:

. message size
. number of messages exchanged
. delivery/UMA model
. type of send model: synchronous (the Collector sends a message and waits
  for the response) or burst (the Collector sends a burst of messages, for
  example 500 or 1000, then waits for the responses).
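The two send models can be sketched with ordinary in-process queues standing
in for the DMQ queues (this is only an illustration of the pacing difference;
the real benchmark uses the DMQ API and TCP/IP, which are not shown here):

```python
import queue
import threading
import time

def agent(inbox, outbox, n):
    # The Agent acts as a "mirror": every message received is sent straight back.
    for _ in range(n):
        outbox.put(inbox.get())

def run(n, size, burst):
    to_agent, from_agent = queue.Queue(), queue.Queue()
    threading.Thread(target=agent, args=(to_agent, from_agent, n),
                     daemon=True).start()
    payload = b"x" * size
    start = time.perf_counter()
    if burst:
        # Burst model: send the whole burst, then collect all the responses.
        for _ in range(n):
            to_agent.put(payload)
        for _ in range(n):
            from_agent.get()
    else:
        # Synchronous model: send one message and wait for its response.
        for _ in range(n):
            to_agent.put(payload)
            from_agent.get()
    return time.perf_counter() - start

sync_time = run(1000, 380, burst=False)
burst_time = run(1000, 380, burst=True)
```

In the burst model all 1000 messages (plus the 1000 mirrored replies) can be
in flight at once, which is exactly the load that exercises the link driver's
queues and quotas.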

Platforms are OpenVMS 6.2 with DMQ 3.2A and TCP/IP Services 4.x, and DEC UNIX
3.2C with DMQ 3.2.

During a test run with these parameters:
380-byte message size
1000 messages
PDEL_NN_MEM / DISCL delivery (UMA) model
burst send model

with the Collector on OpenVMS and the Agent on DEC UNIX, I observed a
strange condition on the OpenVMS side (seen with the MONITOR option of the
DMQ$MENU procedure on OpenVMS):

1) The Collector sent 1000 messages.
2) On the TCP/IP link driver queue, 374 messages were sent.
3) On the Agent's DMQ queue, 374 messages were received and, of course, 374
   messages were sent back.
4) On the TCP/IP link driver queue, 374 messages were received.
5) No messages were received by the Collector.
6) No further messages were sent or received by the TCP/IP link driver after
   several minutes.
(This test was repeated two or three times with the same result.)

If the other direction is selected (Collector on DEC UNIX and Agent on
OpenVMS), processing is very slow.
 
Then we ran the same test after changing the delivery/UMA model to
PDEL_WF_MEM/DISCL (on both ends of the connection), and everything went fine;
the whole run took about 9 seconds.


>> I suppose DMQ works like this:
>> The Collector sends messages, but all of them are queued on the TCP/IP
>> link driver queue on the OpenVMS system. Then the link driver starts to
>> send messages to the Agent, but the return flow (the Agent re-sends each
>> message immediately) causes the TCP/IP link driver queue to run out of
>> quota, and the TCP/IP link driver hangs. The MEDIUM message block queue
>> parameter was set to 4000 blocks of 512 bytes, sized for 1000*2 messages.
>> When I observed the hang, all of the MEDIUM message blocks were available
>> for use.
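The sizing assumption above works out as follows (a back-of-the-envelope
check only, not a statement about DMQ's internal buffer accounting):

```python
# MEDIUM message-block pool as configured: 4000 blocks of 512 bytes each.
pool_bytes = 4000 * 512

# Worst case in the burst test: 1000 outbound messages plus 1000 mirrored
# replies in flight at once, each carrying 380 bytes of payload.
messages_in_flight = 1000 * 2
payload_bytes = messages_in_flight * 380

# A 380-byte message fits in a single 512-byte MEDIUM block (assuming one
# block per message), so 2000 of the 4000 blocks would cover the whole burst.
blocks_needed = messages_in_flight
print(pool_bytes, payload_bytes, blocks_needed)
```

On this arithmetic the MEDIUM pool is twice what the burst needs, which is
consistent with the observation that all MEDIUM blocks were free at the time
of the hang, i.e. the block pool itself does not look like the bottleneck.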

Where is the problem?
Could you give me some in-depth information about the internal communication
of the DMQ servers?

regards
Stefano

2845.1. "Need more info..." by KLOVIA::MICHELSEN (BEA/DEC MessageQ Engineering) Wed Apr 09 1997 15:27
...in order to tell what is going on.  Specifically:

	- DMQ init files from both the UNIX and VMS groups

	- log files from both (i.e., is VMS logging lots of lost
	  messages?)


>If another direction ( Collector on Unix and Agent on OpenVms) is selected 
>processing is very slow.

	  What does "very slow" mean specifically?



Marty