
Conference rocks::dec_edi

Title:DEC/EDI
Notice:DEC/EDI V2.1 - see note 2002
Moderator:METSYS::BABER
Created:Wed Jun 06 1990
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:3150
Total number of notes:13466

3034.0. "Excessive page locking" by UTRTSC::SMEETS (Workgroup support) Fri Feb 28 1997 09:48

Good morning,

A customer asked for a configuration that could handle 3,500-4,000 documents (10 KB each)
per hour, both incoming and outgoing. 40,000 documents, all available at one time,
should be processed within 10 hours, i.e. the inbound document is processed,
handed to the application, and the outbound document is then processed.
All this for ONE partner.
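To put the requirement in perspective, a quick back-of-envelope calculation (my own arithmetic, not from the customer) shows the sustained rate the configuration has to hold:

```python
# Sustained throughput implied by the customer's requirement:
# 40,000 documents in a 10-hour window, end to end.

documents = 40_000
window_hours = 10

per_hour = documents / window_hours
per_second = per_hour / 3600

print(per_hour)              # 4000.0 documents/hour
print(round(per_second, 2))  # 1.11 documents/second, sustained
```

So the batch requirement (4,000/hour) sits at the top of the stated 3,500-4,000 documents/hour range, with no slack for stalls.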

So Digital advised a VAX 7830 (127 VUPs, 3 processors) with 512 MB of memory and an
8 GB RAID set.

Software is DEC/EDI V2.1C VAX/VMS with Oracle RDB V6.1ECO4 and VAX/VMS V6.2.

Both the AUDIT and the ARCHIVE databases have been tuned for these values, and
global buffering has been enabled.

To test this configuration, we placed some 3,000 transmission files in
the IMPEXP gateway and started the gateway.

After the first 100-150 documents were processed we noticed an increasing delay before
the TFs were split. The DECEDI$TFS process is almost always in LEF state.
The TFs are in the status AWAITING_TRANSLATION.

Looking at the machine via DECps, we noticed no I/O, memory, or CPU bottlenecks.

The only thing DECps was complaining about was excessive locking.

Looking via the RDB monitor, we saw huge amounts of page locking.

Today an Oracle RDB specialist will look at the system.

My question: can the traffic as specified by the customer be handled with this
configuration?

Can someone explain why the page locking occurs? Maybe it is due to the fact that all
TFs have almost the same name...

Martin
3034.1. by FORTY2::DALLAS (Paul Dallas, DEC/EDI @REO2-F/E2) Fri Feb 28 1997 12:38
    Martin,
    
    What EDI standard is used? If this is EDIFACT then there is an
    improvement in efficiency in version V2.1D of DEC/EDI. Basically when
    the transmission file was split, the TFS notified the Translator of the
    TRANSMISSION FILE NAME. This reduced traffic between the TFS and TRNS, 
    but increased the traffic to the database, since both the TFS and TRNS
    were trying to lock records in the Audit database. In V2.1D, the
    individual document ids are passed to the TRNS, increasing the internal
    traffic but reducing database activity and decreasing locking. 
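    A toy illustration of the granularity change Paul describes (my own sketch,
    not DEC/EDI internals — the names `TF_0001`/`DOC_nnnn` are made up): when
    both processes key their Audit-database locks on the transmission file
    name, every document funnels through one lock key; keying on individual
    document IDs spreads the contention out.

```python
# Hypothetical sketch of lock granularity (not actual DEC/EDI code).
# documents is a list of (transmission_file_name, document_id) pairs.

def lock_keys(documents, per_document):
    """Return the set of audit-database lock keys the processes contend on."""
    if per_document:
        return {doc_id for (tf_name, doc_id) in documents}   # V2.1D-style
    return {tf_name for (tf_name, doc_id) in documents}      # V2.1C-style

# One transmission file holding 500 documents (illustrative names).
docs = [("TF_0001", f"DOC_{i:04d}") for i in range(500)]

print(len(lock_keys(docs, per_document=False)))  # 1   -> all work contends on one key
print(len(lock_keys(docs, per_document=True)))   # 500 -> contention spread across keys
```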
    
    If EDIFACT is being used, it may be worth upgrading to V2.1D, when it
    ships on VAX (next month I believe). It's already shipping on Alpha. 
    
    P.
3034.2. by METSYS::THOMPSON Fri Feb 28 1997 22:14
Hi,

Whoever called on Friday - sorry I was out!
(dtn: 838 3999).

M
3034.3. "the story continues" by UTRTSC::SMEETS (Workgroup support) Sat Mar 01 1997 22:35
Hi Paul and Mark,


>> Whoever called on Friday - sorry I was out!
>> (dtn: 838 3999).

It was probably someone from our local escalation management. I was on site on
Friday and time was running out on us. On Monday, March 3rd, the customer had to
prove he could handle a load of 40,000 documents in 10 hours.

Due to the problems described in the base note we needed some clarification from
engineering: was the problem we'd encountered solved in V2.1D?

Alas, in V2.1D the occurrence of deadlocks between the TFS and the TRNS has been
minimized, but we were now looking at deadlocks between the IMPEXP and the TFS.

The customer had some 1,500 documents in the IMPEXP directory, all for the same
connection and all for the same partner.

As part of the IMPEXP process, transmission file details are put in the AUDIT
database (CLF_TABLE) and the index is updated. The index is based on several
fields, among others the creation date (a quadword). For all 1,500 TFs the only
differing field is the creation date, and the creation date has a resolution of
one hundredth of a second.

But because the machine is so fast, the hashing algorithm will very often fail,
which leads to deadlock situations (10-second timeout) as seen between the
IMPEXP process and the TFS process.
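A small simulation of the collision mechanism (my own sketch with invented rates, not DEC/EDI code): quantize each file's creation time to 1/100 s, as the index does, and count how many distinct key values survive.

```python
# Hypothetical sketch: if the only distinguishing index field is a creation
# date with 1/100 s resolution, a fast machine inserting many transmission-
# file rows per second produces duplicate index keys.

def index_keys(num_files, inserts_per_second, resolution=0.01):
    """Quantize each file's simulated creation time to the index resolution."""
    keys = []
    for i in range(num_files):
        t = i / inserts_per_second          # simulated creation time (seconds)
        keys.append(int(t / resolution))    # quantized to centiseconds
    return keys

fast = index_keys(1500, 1000)   # fast machine: 1000 inserts/second
slow = index_keys(1500, 50)     # slow machine: 50 inserts/second

print(len(set(fast)))  # 150  -> ~10 files collide on every key value
print(len(set(slow)))  # 1500 -> every key is distinct, no collisions
```

With duplicate keys, concurrent writers (IMPEXP and the TFS) keep landing on the same index pages, which is consistent with the page locking seen in the base note.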

As a workaround we now put 500 documents in one transmission file. 

As discussed with Paul Dallas on Friday afternoon, the hashing algorithm needs to
be changed in order to handle fast machines with large numbers of files.

On Tuesday I'll be back in the office and will raise an IPMT case for this.

Martin

p.s. Paul, thanks a lot for your cooperation last Friday.
3034.4. "Reproducible" by UTRTSC::SMEETS (Workgroup support) Tue Mar 04 1997 13:59
Hi Paul and Mark,

The problem is reproducible on our in-house test system (V2.1C + RDB V5.1-0,
MicroVAX 3100-80, 48 MB), so the problem is not "speed" related.

I've created 100 (identical) TFs, all for the same connection and the same
partner, in the IMPEXP directory.

Then I started the connection and watched DECEDI$AUDIT_DB via
RMU/SHOW STAT DECEDI$AUDIT_DB. In the menu I chose the option "Active User
Stall Messages".

There I could see that the IMPEXP and the TFS processes very frequently cause
deadlock situations, each lasting 10 seconds per deadlock.

This has a huge impact on the throughput.

Martin
3034.5. "DEADLOCK_WAIT" by METSYS::NELSON (David, http://samedi.reo.dec.com/) Tue Mar 04 1997 14:50
    
    Have you tried to change the default deadlock timeout value?
    
    Change DEADLOCK_WAIT to something else.
3034.6. "DEADLOCK_WAIT doesn't solve the initial problem" by UTRTSC::SMEETS (Workgroup support) Tue Mar 04 1997 15:29
Hi David,


>> Have you tried to change the default deadlock timeout value?
>> Change DEADLOCK_WAIT to something else.

Yes, we've changed the DEADLOCK_WAIT to 5 seconds, and indeed the throughput
increased, but lowering the DEADLOCK_WAIT value doesn't solve the initial problem
of the high rate of deadlocks.

Lowering the DEADLOCK_WAIT value just reduces the impact of the problem.
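The distinction is easy to see numerically (illustrative figures, not measurements from the customer's system): the time lost to stalls is roughly the deadlock count times the wait, so halving the wait halves the lost time while leaving the deadlock rate untouched.

```python
# Back-of-envelope sketch: time lost to deadlock stalls is approximately
# (number of deadlocks) x (DEADLOCK_WAIT in seconds). Tuning the wait
# shrinks each stall but does not reduce the number of deadlocks.

def stall_time_seconds(deadlocks, deadlock_wait):
    return deadlocks * deadlock_wait

print(stall_time_seconds(1000, 10))  # 10000 s lost at the 10 s default
print(stall_time_seconds(1000, 5))   # 5000 s lost at 5 s: same cause, less impact
```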

Martin