
Conference pamsrc::decmessageq

Title:NAS Message Queuing Bus
Notice:KITS/DOC, see 4.*; Entering QARs, see 9.1; Register in 10
Moderator:PAMSRC::MARCUSEN
Created:Wed Feb 27 1991
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2898
Total number of notes:12363

2850.0. "IQ utility error message" by AYOV29::LTALBOT () Wed Apr 16 1997 14:57

Could someone explain the error message I am getting while running the IQ option within
the Link Management section of the DMQ menu?

While attempting to get the status of any external group linkage I get the message
	"Received message is larger than the user message area".

I am running DMQ v3.2-13(3213) on an Alpha with OpenVMS 6.2-1H2.

At least one of the nodes I am trying to communicate with is DMQ v3.2A-22(3222) on VaxVMS 6.1. Other nodes are running
at least this level, or older versions of DMQ and VaxVMS. While doing a LOOP test to this remote group I also get a
	"Data Verification error".


Some additional background to this problem -

This is not a new application. The original version runs on DMQ v2.1B-36 on VaxVMS v5.5-2. We are currently porting
it to Alpha. I have done a LOOP test to all the remote groups from the DMQ v2 group with good results, and have been
comparing this to the same LOOP test on the new Alpha setup. The LOOP test on Alpha produces a large number of timeouts,
and one site gives these data verification errors.

My dmq$init.txt is attached.

Thanks in advance,

Les Talbot (823-3714)

*************************************************************************
*                                                                       *
*                 DECmessageQ Initialization File                       *
*                                                                       *
*************************************************************************
*
%VERSION 3.0
*
%PROFILE      ***** Profile Parameters *****
*
ACCEPT_KILL_CMD       NO   ! Control MONITOR terminate requests
ENABLE_XGROUP         YES   ! Enable DECmessageQ cross-group access
XGROUP_VERIFY         NO   ! Limit incoming cross-group connections
ENABLE_MRS           NO   ! Enable DECmessageQ Message Recovery Services
ENABLE_JRN           YES   ! Enable DECmessageQ Message Journaling Services
ENABLE_SBS           YES   ! Enable DECmessageQ Selective Broadcast Services
ENABLE_QXFER          NO   ! Enable DECmessageQ MRS Queue Transfer Services
FIRST_TEMP_QUEUE     200   ! Select start of temp queue pool    (200-950)
XGROUP_TABLE_SIZE     25   ! Select max number of group entries (25-32000)
NAME_TABLE_SIZE      200   ! Select max number of GNT entries   (100-32000)
RCV_MSG_QUOTA_METHOD MAX   ! Select type of rcv msg quota deductions (MIN | MAX)
ATTACH_TMO           600   ! Select PAMS_ATTACH_Q timeout       (100-36000)
PAMSV25_MODE          NO   ! Select PAMS V2.5 compatible DECnet object
*
%EOS
*
%XGROUP      ***** Cross-Group Connection Table ******
*
*                  DMQ                  
*   DMQ           Group                 Gen Buff Recnt --Window--- 
*Group name        ID      System       Cnt Pool Timer Delay  Size Trnsprt Addr
*--------------- ----- ---------------- --- ---- ----- ----- ----- ------- -----
OVP                66     AYRDSS         S   100  -1    -1    -1    DECNET
ODW_REO           2961   RDGE60         Y   100  300   -1    -1    DECNET
ODW_UTO           2898   HLIS21         Y   100  300   -1    -1    DECNET
ODW_BPS           2836   BPSTEP         Y   100  300   -1    -1    DECNET
ODW_DBO           2817   DUB03          Y   100  300   -1    -1    DECNET
ODW_IYO            889   MLNEUC         Y   100  300   -1    -1    DECNET
ODW_ISO           2815   MACABI         Y   100  300   -1    -1    DECNET
ODW_FNO           2819   HSKAP2         Y   100  300   -1    -1    DECNET
ODW_SQO           2940   MDROSO         Y   100  300   -1    -1    DECNET
ODW_XIP           2839   MINHO          Y   100  300   -1    -1    DECNET
ODW_RTO           2824   RTOOF          Y   100  300   -1    -1    DECNET
ODW_EVO           2909   EVTV02         Y   100  300   -1    -1    DECNET
ODW_CDG           2912   EVOIS8         Y   100  300   -1    -1    DECNET
ODW_BRO           2810   BRSEIS         Y   100  300   -1    -1    DECNET
ODW_AUI           2906   VNOAP1         Y   100  300   -1    -1    DECNET
ODW_ZUO           2903   RPRT01         Y   100  300   -1    -1    DECNET
*
*
%EOS
*
%ROUTING   * initial routing table
*
* Target     Route-Thru
* Group       Group
* ------     ----------
*     1            2
*     2            4
*     3            4
*     4            2
*     5            4
*     6            2
*     7            4
*
%EOS
*
%CLS    **** Client Lib Server Configuration Table ****
*
*                              Maximum #
*  Endpoint     Transport      of Clients
*  --------     ---------      ----------
*    5000         TCPIP             16
*    5001         TCPIP             16
*    6000         DECNET            32
*
%EOS
*
%BUFFER      ***** Buffer Pool Configuration Table *******
*                                                        Reserve
*Msg-Block-Type  Byte-Size    Number    Warning-level     Count
*--------------  ---------    ------    -------------    ---------
SMALL               1000        600           20           60
MEDIUM             10000        100           20            5
LARGE              32000        100           20            1
%EOS
*
*
%QCT         ***** Queue Configuration Table ******
*
*                         ---Pool Quota---  UCB   Q    Q  Confrm Perm Name Check
*    Queue Name       Num  Bytes  Msgs Ctrl Send Type Own  Style Act  Scope ACL
*------------------- ---- ------- ---- ---- ---- ---- ---- ----- ---- ----- ----
IMS_MASTER_OVP          1  1000000  .  byte   .   P    .     .    N     L    N
*
OVP_UPD_UK              2  1000000  .  byte   .   P    .     .    N     L    N
OVP_BLD_UK1             3  1000000  .  byte   .   P    .     .    N     L    N
OVP_BLD_UK2             4  1000000  .  byte   .   P    .     .    N     L    N
OVP_SQL_UK              5  1000000  .  byte   .   P    .     .    N     L    N
*
OVP_UPD_NOR             6  1000000  .  byte   .   P    .     .    N     L    N
OVP_BLD_NOR1            7  1000000  .  byte   .   P    .     .    N     L    N
OVP_BLD_NOR2            8  1000000  .  byte   .   P    .     .    N     L    N
OVP_SQL_NOR             9  1000000  .  byte   .   P    .     .    N     L    N
*
OVP_UPD_GY              10  1000000  .  byte   .   P    .     .    N     L    N
OVP_BLD_GY1             11  1000000  .  byte   .   P    .     .    N     L    N
OVP_BLD_GY2             12  1000000  .  byte   .   P    .     .    N     L    N
OVP_SQL_GY              13  1000000  .  byte   .   P    .     .    N     L    N
*
OVP_UPD_SMA             14  1000000  .  byte   .   P    .     .    N     L    N
OVP_BLD_SMA1            15  1000000  .  byte   .   P    .     .    N     L    N
OVP_BLD_SMA2            16  1000000  .  byte   .   P    .     .    N     L    N
OVP_SQL_SMA             17  1000000  .  byte   .   P    .     .    N     L    N
*
ODW_CLIENT_REO          18  1000000  .  byte   .   P    .     .    N     L    N
ODW_CLIENT_RTO          19  1000000  .  byte   .   P    .     .    N     L    N
ODW_CLIENT_FNO          20  1000000  .  byte   .   P    .     .    N     L    N
ODW_CLIENT_EVO          21  1000000  .  byte   .   P    .     .    N     L    N
ODW_CLIENT_CDG          22  1000000  .  byte   .   P    .     .    N     L    N
ODW_CLIENT_BRO          23  1000000  .  byte   .   P    .     .    N     L    N
ODW_CLIENT_ZUO          24  1000000  .  byte   .   P    .     .    N     L    N
ODW_CLIENT_AUI          25  1000000  .  byte   .   P    .     .    N     L    N
ODW_CLIENT_UTO          26  1000000  .  byte   .   P    .     .    N     L    N
ODW_CLIENT_BPS          27  1000000  .  byte   .   P    .     .    N     L    N
ODW_CLIENT_IYO          28  1000000  .  byte   .   P    .     .    N     L    N
ODW_CLIENT_SQO          29  1000000  .  byte   .   P    .     .    N     L    N
ODW_CLIENT_DBO          30  1000000  .  byte   .   P    .     .    N     L    N
ODW_CLIENT_ISO          31  1000000  .  byte   .   P    .     .    N     L    N
ODW_CLIENT_XIP          32  1000000  .  byte   .   P    .     .    N     L    N
*
OVP_IPE_CL1_DP          33  1000000  .  byte   .   P    .     .    N     L    N
OVP_IPE_LD1_DP          34  1000000  .  byte   .   P    .     .    N     L    N
OVP_IPE_FL1_DP          35  1000000  .  byte   .   P    .     .    N     L    N
OVP_IPE_SH1_DP          36  1000000  .  byte   .   P    .     .    N     L    N
OVP_IPE_IT1_DP          37  1000000  .  byte   .   P    .     .    N     L    N
OVP_IPE_NRVNUE_DP       38  1000000  .  byte   .   P    .     .    N     L    N
OVP_IPE_REF_DP          39  1000000  .  byte   .   P    .     .    N     L    N
*
OVP_AYO_DP              40  1000000  .  byte   .   P    .     .    N     L    N
*
*           Test ODW Servers 
*           ----------------
ODW_BCKLG_RTO_DP        41  1000000  .  byte   .   P    .     .    N     L    N
ODW_BCKLG_REO_DP        42  1000000  .  byte   .   P    .     .    N     L    N
ODW_BCKLG_FNO_DP        43  1000000  .  byte   .   P    .     .    N     L    N
ODW_BCKLG_EVO_DP        44  1000000  .  byte   .   P    .     .    N     L    N
ODW_BCKLG_CDG_DP        45  1000000  .  byte   .   P    .     .    N     L    N
ODW_BCKLG_BRO_DP        46  1000000  .  byte   .   P    .     .    N     L    N
ODW_BCKLG_ZUO_DP        47  1000000  .  byte   .   P    .     .    N     L    N
ODW_BCKLG_AUI_DP        48  1000000  .  byte   .   P    .     .    N     L    N
ODW_BCKLG_UTO_DP        49  1000000  .  byte   .   P    .     .    N     L    N
ODW_BCKLG_BPS_DP        50  1000000  .  byte   .   P    .     .    N     L    N
ODW_BCKLG_IYO_DP        51  1000000  .  byte   .   P    .     .    N     L    N
ODW_BCKLG_SQO_DP        52  1000000  .  byte   .   P    .     .    N     L    N
ODW_BCKLG_ISO_DP        53  1000000  .  byte   .   P    .     .    N     L    N
ODW_BCKLG_XIP_DP        54  1000000  .  byte   .   P    .     .    N     L    N
ODW_BCKLG_DBO_DP        55  1000000  .  byte   .   P    .     .    N     L    N
*
*  SBS Server uses the following UCB numbers for Optimized Delivery
*
sbs_eth_control        74       0    0   .    E    .    .    .    .     L    N
sbs_eth_chan1          75       0    0   .    E    .    .    .    .     L    N
sbs_eth_chan2          76       0    0   .    E    .    .    .    .     L    N
*                                            
*
*  Queues 90-100 & 150-199 are reserved for DECmessageQ utilities
temporary_q             0   64000  100   .    .    .    .    .    .     L    N
screen_process          0   64000  100   .    .    .    .    .    .     L    N
spare1                 90  100000  100   .    .    .    .    .    Y     L    N
all_ucbs               91       0    0   .    .    .    .    .    .     L    N
timer_queue            92       0    0   .    .    .    .    .    .     L    N
null                   93       0    0   .    .    .    .    .    .     L    N
internal1              94   64000  100   .    .    .    .    .    Y     L    N
qtransfer_server       95 1000000 1000 None   .    .    .    .    N     L    N
dead_letter_queue      96   64000  100   .    .    .    .    .    Y     L    N
mrs_server             98 1000000 1000 None   .    .    .    .    N     L    N
sbs_server             99 1000000 1000 None   .    .    .    .    N     L    N
avail_server           99 1000000 1000 None   .    .    .    .    N     L    N
com_server            100 1000000 1000 None   .    .    .    .    N     L    N
declare_server        100 1000000 1000 None   .    .    .    .    N     L    N
connect_server        100 1000000 1000 None   .    .    .    .    N     L    N
queue_server          100 1000000 1000 None   .    .    .    .    N     L    N
pams_transport        100 1000000 1000 None   .    .    .    .    N     L    N
dmq_loader            150  250000  100   .    .    .    .    .    N     L    N
dcl_by_q_name         151       0    0   .    .    .    .    .    .     L    N
tcpip_ld              152 1000000 1000 None   .    .    .    .    N     L    N
decnet_ld             153 1000000 1000 None   .    .    .    .    N     L    N
reserved_ld           154 1000000 1000 None   .    .    .    .    N     L    N
event_logger          155 1000000 1000 None   .    .    .    .    N     L    N
jrn_server            156 1000000 1000 None   .    .    .    .    N     L    N
mrs_failover          157       0    0   .    .    .    .    .    N     L    N
dmq_fulltest_pq       191  250000  100   .    .    .    .    .    N     L    N
dmq_fulltest_sq       192  250000  100   .    .    S  191    .    N     L    N
example_q_1           193   64000  100   .    .    .    .    .    N     L    N
example_q_2           194   64000  100   .    .    .    .    .    N     L    N
IVP_unowned_sq        195  250000  100   .    .    S    .    .    N     L    N
IVP_private_MOT1     4999       0    0   .    .    .    .    .    .     L    N
IVP_universal_MOT1   5001       0    0   .    .    .    .    .    .     L    N
*
%EOS
*
*                           
%SBS   ******* SBS Server Initialization Section ************
*
MOT_MODE  DMQ           ! DMQ or ETH primary service
MOT_LOW   4800          ! 4000 - 4900
MOT_MID   5000          ! must be 5000
MOT_HIGH  5200          ! 5100 - 6000
*
*ETH_DEVICE  ESA0:      ! VMS device name of the Ethernet board
*
*    <<<<<<<<<<<<<<<<<< Warning >>>>>>>>>>>>>>>>>>>
* The protocol and Ethernet addresses shown below are not registered
* and are not guaranteed to not cause a conflict.  Use them with
* discretion.
*
*          |------ MCA ----|  |Prot #|  |- UCB queue -|
*CNTRL_CHAN AB-AA-34-56-78-90   81F0         74
*
*     |------ MCA ----|  |Prot #|  |- UCB queue -|
*SET 0 AB-12-34-56-78-90   81F1         75
*MAP 5101 0             ! map a MOT to an Ethernet channel
*MAP 5102 0             ! map a MOT to an Ethernet channel
*
*     |------ MCA ----|  |Prot #|  |- UCB queue -|
*SET 1 AB-12-34-56-78-92   81F2         76
*MAP 5103 1             ! map a MOT to an Ethernet channel
*
%EOS
*
*
%MRS   ******* MRS/JRN Servers Initialization Section ************
*
AREA_SIZE             512  ! disk blks per file (min:128, max:16384, def:512)
NUM_DQF_AREAS        1000  ! min:100, max:1000000,    default:1000
NUM_SAF_AREAS        1000  ! min:0,   max:1000000,    default:1000
NUM_PCJ_AREAS        1000  ! min:0,   max:1000000,    default:1000
NUM_DLJ_AREAS        1000  ! min:0,   max:1000000,    default:1000
NUM_MESSAGES          512  ! min:128, max:2147483647, default:512
NUM_QUEUES            128  ! min:128, max:2147483647, default:128
CACHE_PERCENTAGE       90  ! % rcv msg quota for MRS msgs (min:1, max:100, def:90) 
USE_HIGH_WATER_MARK   YES  ! checkpt MRS sizing params to disk (YES/NO)
LOAD_MRS_CTRS         YES  ! init recoverable msg ctrs on startup (YES/NO)
RCVR_ONLY_CONFIRM     YES  ! limit msg confirms to receiving process (YES/NO)
XGRP_JRN_CTRL          NO  ! allow JRN cntrl msgs from other groups (YES/NO)
REDELIVERY_TIMER       10  ! integer seconds (min:0, max:5000, default:10)
*
PCJ_FILENAME    DMQ$MRS:MRS_%bg.PCJ  ! char[64]	- %bg is a macro that
DLJ_FILENAME    DMQ$MRS:MRS_%bg.DLJ  ! char[64] - expands to bus_group
*
%EOS
*
%GNT ********* Group Name Table Section *********************
*
*        Queue Name                   Queue Addr     Scope
*------------------------------       ----------     -----
*global_queue1                             1.234       G
lcl_queue1                                   134       L
lcl_queue2                                   135       L
lcl_queue3                                   136       L
*
%EOS
*
%END

2850.1. "Need more info..." by KLOVIA::MICHELSEN (BEA/DEC MessageQ Engineering) Thu Apr 17 1997 12:20 (30 lines)
>Could someone explain the error message I am getting while running the IQ
>option within the Link Management section of the DMQ menu.

>While attempting to get the status of any external group linkage I get the
>message
>	"Received message is larger than the user message area".

	  Please get a header trace of the message that causes this problem
	since it works here.


>At least one of the nodes I an trying to communicate with is DMQ
>v3.2A-22(3222) on VaxVMS 6.1 Other nodes are running at least 
>this level or older versions of DMQ and VaxVMS. While doing a LOOP test to
>this remote group I also set a
>	"Data Verification error".


	  Please be specific about a combination that causes this problem,
	plus the answers given to DMQ$LOOP.  Also, please examine the log
	files on both sides of the link to see whether messages are being
	lost or other errors are reported.  One thing that occurs to me that
	could cause this behavior is mismatched large buffer sizes: you
	attempted to loop a message larger than the remote system can handle,
	and it was truncated.  This problem should show up in the log file,
	and it will be handled better in the V4.0 release.
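
	  As a rough picture of how a truncated loop message shows up, assume
	the loop payload is a run of 16-bit words where word #i carries the
	value i; if the echo comes back short and zero-filled, the check fails
	at the first missing word.  The C sketch below only illustrates that
	idea (it is not DMQ$LOOP source), simulating an echo cut off after
	399 words.

/* Illustration only -- not DMQ$LOOP source.  Shows how a truncated echo of a
 * word-numbered payload produces a "Data verification error, word # N".     */
#include <stdio.h>
#include <string.h>

#define NWORDS 512                          /* payload size in 16-bit words */

static void fill_pattern(short *buf, int nwords)
{
    for (int i = 0; i < nwords; i++)
        buf[i] = (short)i;                  /* word #i carries the value i  */
}

static int verify_pattern(const short *buf, int nwords)
{
    for (int i = 0; i < nwords; i++)
        if (buf[i] != (short)i) {
            printf("*** Data verification error, word # %d sent=%d rcvd=%d ***\n",
                   i, i, (int)buf[i]);
            return -1;
        }
    return 0;
}

int main(void)
{
    short sent[NWORDS], echoed[NWORDS];

    fill_pattern(sent, NWORDS);

    /* Simulate a remote side that truncated the message after 399 words and
     * returned zeros for the remainder, as a mismatched buffer pool might.  */
    memset(echoed, 0, sizeof echoed);
    memcpy(echoed, sent, 399 * sizeof(short));

    return verify_pattern(echoed, NWORDS) == 0 ? 0 : 1;
}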



	Marty
2850.2. "Log files + details" by AYOV29::LTALBOT () Thu Apr 17 1997 13:52 (69 lines)
Marty,

As requested I have concentrated on two groups only. My Alpha node has
OpenVMS 6.2-1H2 and DMQ v3.2-13(3213); the remote node I am testing with is
VaxVMS 6.1 and DMQ v3.2A-22(3222). This remote node is group 2961 (ODW_REO).

The attached event log file relates to when I did a loop test to 2961 and got
a stream of messages
*** Data verification error, word # 399 sent=399 rcvd=0 ***

Immediately after this test I did an IQ within Link Management and got the message
"Received message is larger than the user massage area"

The remote group (2961) had nothing in its log file.

I got the remote group to perform the same loop test and IQ enquiry back into
my node. Both were successful with no problems.

I enabled the trace at DCL level and ran dmq$mgr_utility.exe directly, then repeated
the IQ option. The screen grab is below.

 Enter Group Number or <CR> to Return : 2961
DmQ T 44:05.0 Time Stamp - 17-APR-1997 14:44:05.00
DmQ T 44:05.0 =========================== PAMS Header V2.4 ===================================
DmQ T 44:05.0   Target=    66.100   Source=    66.276    Alt-tgt=    66.155 Size-data=     351
DmQ T 44:05.0  Org-tgt=    66.100  Org-src=    66.276   Ntfy-tgt=    66.276 Owner-PID=20200496
DmQ T 44:05.0    Class=        29     Type=      -975 Alloc-flag=         1  Priority=       0
DmQ T 44:05.0 Del-Mode=  00000440  DIP-sts=  00000001  Visit-cnt=        16  Prev-grp=       0
DmQ T 44:05.0   Org-DM=  00000040  UMA-sts=         0    Timeout=       300    Endian=       0
DmQ T 44:05.0 Msg-flgs=  00000000 CRC[H:D]= 0000:0000   Seg[C:N]=       0:0        RP=       0
DmQ T 44:05.0  Seq-num=  01140042     ROSN=  00000000        LRA=  00000000     Links=00000000
DmQ T 44:05.0            00000007            7754E85C              00000000           00000000
DmQ T 44:05.0 =========================== PAMS Header V2.4 ===================================
DmQ T 44:05.0   Target=    66.276   Source=      66.1    Alt-tgt=       0.0 Size-data=     552
DmQ T 44:05.0  Org-tgt=    66.276  Org-src=      66.1   Ntfy-tgt=      66.1 Owner-PID=20200496
DmQ T 44:05.0    Class=        50     Type=        50 Alloc-flag=         1  Priority=       0
DmQ T 44:05.0 Del-Mode=  00000141  DIP-sts=  00000001  Visit-cnt=        16  Prev-grp=       0
DmQ T 44:05.0   Org-DM=  00000141  UMA-sts=         0    Timeout=       300    Endian=       0
DmQ T 44:05.0 Msg-flgs=  00000000 CRC[H:D]= 0000:0000   Seg[C:N]=       0:0        RP=       0
DmQ T 44:05.0  Seq-num=  00010042     ROSN=  00000000        LRA=  00000000     Links=FFFFEE80
DmQ T 44:05.0            0001CA07            33563543              00000000           FFF95150

Bus:112  Group:66      DECmessageQ Manager Utility     Thu Apr 17 14:44:05 1997


                         *** Error returned by PAMS_GET_MSGW
                            %PAMS-E-AREATOSMALL, Received message is larger than the user's message area



Extract from EVL log file while doing Loop test and IQ option.

COM_SERVER      17-APR-1997 12:46:40.88 I Connect confirm from node BRSEIS is protocol V2.3
COM_SERVER      17-APR-1997 12:46:40.88 I Received a connect confirm from node BRSEIS group 2810 (ODW_BRO)
COM_SERVER      17-APR-1997 14:01:33.11 I Received a request to reset the counters at 17-APR-1997 14:01:33
COM_SERVER      17-APR-1997 14:02:12.23 F DECnet communications lost to message queue group 2961 (ODW_REO)
COM_SERVER      17-APR-1997 14:02:12.23 F %SYSTEM-F-LINKABORT, network partner aborted logical link
COM_SERVER      17-APR-1997 14:02:34.58 I Accepted connect from system REPROT for group 2961 (ODW_REO)
COM_SERVER      17-APR-1997 14:02:55.14 I Received a request to reset the counters at 17-APR-1997 14:02:55
COM_SERVER      17-APR-1997 14:05:07.59 F DECnet unknown communications error for message queue group 2836 (ODW_BPS)
COM_SERVER      17-APR-1997 14:05:07.59 F %SYSTEM-F-TIMEOUT, device timeout
COM_SERVER      17-APR-1997 14:10:08.46 F DECnet communications lost to message queue group 2961 (ODW_REO)
COM_SERVER      17-APR-1997 14:10:08.46 F %SYSTEM-F-LINKABORT, network partner aborted logical link
COM_SERVER      17-APR-1997 14:10:25.20 I Accepted connect from system REPROT for group 2961 (ODW_REO)


Regards,
Les

2850.3. "Look at 66.1..." by KLOVIA::MICHELSEN (BEA/DEC MessageQ Engineering) Fri Apr 18 1997 13:10 (7 lines)
re: .2

  It looks like one of your processes is, for some reason, continuing to
send messages to a temporary queue after it has been recycled.


Marty
2850.4. "Note 66 ???" by AYOV29::LTALBOT () Fri Apr 18 1997 13:49 (5 lines)
    Sorry Marty, I think you are asking me to look at note 66? What is the
    connection between 66 and my question?
    
    Les                                                     
    
2850.5. "I meant *queue* 66.1..." by KLOVIA::MICHELSEN (BEA/DEC MessageQ Engineering) Fri Apr 18 1997 17:17 (9 lines)
re: .4

  The header trace indicates that a message from queue 66.1 caused the
DMQ$MGR_UTILITY to have problems.  This program only has dialogues with
queues 98, 100 & 155; specifically, this request goes to the COM Server
(Q-100).
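
  For anyone decoding the header trace in .2: the "group.queue" pairs are a
packed pair of 16-bit numbers, group then queue.  The sketch below splits
them out; the high-word/low-word packing shown is an assumption on my part,
so check the include files shipped with your kit rather than relying on it.

/* Helper for reading "group.queue" pairs such as those in the header trace.
 * The packing (group in the high 16 bits, queue in the low 16 bits) is an
 * assumption for illustration; verify it against your kit's include files. */
#include <stdio.h>

struct dmq_addr { unsigned short group, queue; };

static struct dmq_addr split_addr(unsigned int packed)
{
    struct dmq_addr a;
    a.group = (unsigned short)(packed >> 16);
    a.queue = (unsigned short)(packed & 0xFFFFu);
    return a;
}

int main(void)
{
    /* 66.1 (IMS_MASTER_OVP in the QCT in .0) and 66.100 (the COM Server). */
    unsigned int examples[] = { (66u << 16) | 1u, (66u << 16) | 100u };

    for (int i = 0; i < 2; i++) {
        struct dmq_addr a = split_addr(examples[i]);
        printf("%u.%u = group %u, queue %u\n",
               (unsigned)a.group, (unsigned)a.queue,
               (unsigned)a.group, (unsigned)a.queue);
    }
    return 0;
}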


Marty
2850.6. "IQ Now Working" by AYOV29::LTALBOT () Mon Apr 21 1997 14:04 (13 lines)
    Marty,
    Yes, I get the dummy of the year award for not picking 66.1 as a queue!
    
    I have stopped the process on queue 66.1 and everything is working OK.
    
    I am talking to the (MSG+) IMS support people, as 66.1 is the IMS Master
    server.
    
    Thanks for your help,
    
    Regards,
    Les