
Conference rdgeng::cics_technical

Title:Discussion of CICS technical issues
Moderator:IOSG::SMITHF
Created:Mon Mar 13 1995
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:192
Total number of notes:680

183.0. "CICS in a DECsafe configuration" by NNTPD::"ricardo.lopez@sqo.mts.dec.com" (Ricardo Lopez Cencerrado) Tue Apr 22 1997 16:09

Hello,

I am currently involved in configuring a CICS system that uses DECsafe ASE to
provide failover functionality. We are planning to use DUnix 3.2G, CICS 2.1A,
Oracle 7.1.6, and a DCE-Lite configuration with DCE 1.3B.

The users of this system will access it using the EPI client for CICS under
DUnix, to reach a financial application running under CICS. The system will have
to support further development, hence there will be at least one production
region and two development regions running on the system.

I would like to know if there is specific documentation on configuring CICS in
a DECsafe environment, apart from entries 103 and 118 in this conference (I
guess entry 79 does not apply to a DCE-Lite configuration?).

I am especially interested in the SFS configuration. The SFS and the CICS
region are on the same host (no remote SFS) in a simple single-machine CICS
system. Should the SFS volumes be set up as an ASE disk service, or should each
system have separate SFS volumes? Is it possible, and does it make sense, to
have more than one SFS server in this system?

Entry 118.3 indicates how to configure the bindings file using two entries with
an ipalias. Is this the appropriate setup for the failover configuration?

Are there any failover-time issues regarding the DCE-Lite configuration?

From 103.3 and 118.3 I understand it is possible to define /var/cics*,
/var/dce_local*, and the SFS data and log volumes as an ASE disk service. Would
this be the easiest configuration for a single-host CICS configuration with
failover?

The customer does not use VSAM files, journals, or permanent storage on SFS. He
always cold-starts the regions and is not planning to back up SFS, believing
that no valuable data is kept in the SFS volumes. Is this assumption correct,
or should the SFS be backed up?

Please, let me know if any more info is needed.


Thanks in advance.


Ricardo.
[Posted by WWW Notes gateway]
183.1. "Some help/hints..." by CICS03::helen (Helen Pratt) Tue Apr 22 1997 18:12

>>The users of this system will access it using the EPI client for CICS under
>>DUnix to access a financial application running under CICS. The system will
>>have to support further development, hence there will be at least one
>>production region and two development regions running on the system.

Are there other applications to be run on this pair of systems?  The
reason I ask is that it may make sense to use one system for development and
one for production, with the scope to fail over between the two - just a
thought.

>>I would like to know if there is specific documentation on configuring CICS
>>in a DECsafe environment, apart from the entries 103 and 118 in this
>>conference (I guess entry 79 does not apply to a DCE-Lite configuration?).

There is no documentation specifically about configuring CICS in a DECsafe
environment.

>>I am specially interested in the SFS configuration. The SFS and CICS region
>>are in the same host (no remote SFS) in a simple single machine CICS system.
>>Should the SFS volumes be set up as an ASE disk service or have each system
>>separate SFS volumes?

In order to have true failover of your SFS, you must have the SFS volumes
under an ASE disk service.

>>Is it possible and does it make sense to have more than one SFS server in
>>this system?

Each region should always work with the same SFS, whichever system it is running
on.  You might want to consider having two separate SFSs, one for the
production region and one for the development region(s); this would give
your production region greater stability.

>>Entry 118.3 indicates how to configure the bindings file using two entries
>>with an ipalias. Is this the appropriate setup for the failover configuration?

Yes I believe it is.
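For reference, the file format matches the transcript later in this topic (note .2). A minimal sketch of what a two-entry setup might look like - `cics_alias` here is an assumed ipalias name that moves with the ASE service, and the host name and port are illustrative only, so check 118.3 for the exact entries:

```
/.:/cics/sfs/devel      ncacn_ip_tcp:cics_alias[5550]
/.:/cics/sfs/devel      ncacn_ip_tcp:capsu001[5550]
```

The idea is that clients resolve the server through the alias that follows the disk service on failover, rather than a fixed physical host.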

>>Are there any failover-time issues regarding the DCE-Lite configuration?

Not that I'm aware of.

>>From 103.3 and 118.3 I understand it is possible to define both the
>>/var/cics*, /var/dce_local* and the SFS data and log volumes as an ASE disk
>>service. Would this be the easiest configuration for a single host CICS
>>configuration with failover?

Yes - I believe it is.

>>The customer does not use VSAM files, journals or permanent storage on SFS.
>>He always coldstarts the regions and is not planning to backup SFS, thinking
>>no valuable data is being kept in the SFS volumes. Is this theory correct or
>>should the SFS be backed up?

You might want to have the customer rethink this.  If a region terminates
unexpectedly and there are in-flight transactions, they will not be tidied
up if the region is cold-started.  To tidy up in-flight transactions you
must auto-start the region, and this uses information maintained in the SFS.

Hope this helps,

Helen.

183.2. "cicssfs problem in DECsafe environment" by NNTPD::"ricardo.lopez@sqo.mts.dec.com" (Ricardo Lopez Cencerrado) Tue Apr 29 1997 19:39
Hello,

	I am currently trying to configure an SFS server so that it can be moved as
part of an ASE disk service. To accomplish this, I think it is required that
the LSM volumes used by SFS are in a disk group other than rootdg (rootdg is
not a valid disk group to use in an ASE disk service?).

	I have created the required volumes with appropriate names using dxlsm on
the system, modified their group/user as required, and run them through
cicssfsconf. However, when doing the first cicssfs coldstart I get Encina
error messages.

<<<<
ERZ036126I/0411: Directory '/var/cics_servers/archives/devel' created
   1 04537 97/04/29:13:25:25.945744 a0042c27 W  Call to vol_InitializeDisk failed with status ENC-vol-0016
ERZ036203E/0471: Unable to create logical volume 'log_Sdevel' for server '/.:/cics/sfs/devel'
>>>>

	All this happens when trying to use LSM volumes in a disk group other than
rootdg. A previous default installation (using rootdg) ran without problems,
and I guess Encina is trying to find its required LSM volumes in rootdg and
cannot find them.

	If this is what happened, is there some way to configure the LSM volumes and
get SFS to search for them in a disk group other than rootdg? Is another disk
configuration better? Or simply, what am I getting wrong?


	This request is urgent, and I would certainly appreciate a fast answer to
this problem. Any required information can be posted or sent by email.


	Thanks in advance,


	Ricardo.


	The following is a transcript of the sfs configuration steps:
.
.
.
Finished adding user account for (Sdevel).
# volprint -g ase_dev
TYPE NAME         ASSOC        KSTATE     LENGTH COMMENT
dg   ase_dev      ase_dev      -               - 
dm   rza40        rza40        -        16755016 
dm   rzb40        rzb40        -        25133044 
dm   rze40        rze40        -         8376988 
sd   rza40-01     vol01-01     -        16755016 
sd   rzb40-01     vol02-01     -        25133044 
sd   rze40-01     sfs_Sdevel-01 -          524288 
sd   rze40-02     log_Sdevel-01 -          524288 
sd   rze40-03     DCE-01       -         2048000 
plex DCE-01       DCE          ENABLED   2048000 
plex log_Sdevel-01 log_Sdevel   ENABLED    524288 
plex sfs_Sdevel-01 sfs_Sdevel   ENABLED    524288 
plex vol01-01     vol01        ENABLED  16755016 
plex vol02-01     vol02        ENABLED  25133044 
vol  DCE          fsgen        ENABLED   2048000 
vol  log_Sdevel   gen          ENABLED    524288 
vol  sfs_Sdevel   gen          ENABLED    524288 
vol  vol01        fsgen        ENABLED  16755016 
vol  vol02        fsgen        ENABLED  25133044 
>>> the volumes are already there -> set user and group first
# voledit set user=Sdevel group=cics log_Sdevel
# voledit set user=Sdevel group=cics sfs_Sdevel
>>> now create a new sfs
# ps -ef | grep dce
root      3881  2622  0.0 13:19:42 ttyp3        0:00.00 grep dce
# cicscp start dce
ERZ096002I/0003: cicscp command completed successfully
# cicscp start dce
ERZ096002I/0003: cicscp command completed successfully
# ps -ef | grep dce
root      3905     1  0.0 13:19:50 ??           0:00.02 /opt/dcelocal/bin/rpcd
root      3908  2622  0.0 13:19:53 ttyp3        0:00.00 grep dce
# cicssfscreate -?
ERZ038194I/0129: Usage: cicssfscreate { -? | [-v] [-I] [-S] [-m modelId] serverName [[attributeName=attributeValue]...] }
# cicssfscreate -v -S -I /.:/cics/sfs/devel ShortName=Sdevel UserID=Sdevel
ERZ038037I/0108: Server '/.:/cics/sfs/devel' added to SSD
ERZ010130I/0336: Creating subsystem 'cicssfs.Sdevel'
ERZ058043I/0606: Subsystem 'cicssfs.Sdevel' successfully added to the cicssrc database cicssfs.Sdevel
ERZ038038I/0112: Server '/.:/cics/sfs/devel' added as a subsystem
# echo "ok up to now"
ok up to now
# pwd
/var
# cd cics_servers
# cat server_bindings
/.:/cics/sfs/devel      ncacn_ip_tcp:capsu001[5550]
# env | grep ENCINA
ENCINA_BINDING_FILE=/var/cics_servers/server_bindings
ENCINA_ROOT=/opt/encina
# echo "environment variable ok also"
environment variable ok also
# ps -ef | grep dce
root      3905     1  0.0 13:19:50 ??           0:00.02 /opt/dcelocal/bin/rpcd
root      4145  2622  0.0 13:24:02 ttyp3        0:00.00 grep dce
# cicssfs -?
ERZ038232I/0539: Usage: cicssfs { -? | [-t traceMask] [[-T traceDestination]...] [-a] [-I] [serverName] [[attributeName=attributeValue]...]}.
# cicssfs /.:/cics/sfs/devel StartType=cold
ERZ038214I/0510: Authorization for server '/.:/cics/sfs/devel' has been set to 'none'
ERZ038216I/0519: Subsystem 'cicssfs.Sdevel' has been initialized.
ERZ038219I/0523: Server '/.:/cics/sfs/devel' is responding to RPCs.
ERZ036126I/0411: Directory '/var/cics_servers/archives/devel' created
   1 04537 97/04/29:13:25:25.945744 a0042c27 W  Call to vol_InitializeDisk failed with status ENC-vol-0016
ERZ036203E/0471: Unable to create logical volume 'log_Sdevel' for server '/.:/cics/sfs/devel'
ERZ036152E/0468: Encina server '/.:/cics/sfs/devel' started, but has not been initialized

>>> the message in /var/cics_servers/SSD/cics/sfs/devel is:
   1 04425 97/04/29:13:25:23.329888 502c3448 A  Ready for additional configuration ... Tue Apr 29 13:25:23 1997

>>>> END.
183.3. "CICS_LOG_VG and CICS_SFS_VG" by CICS03::helen (Helen Pratt) Wed Apr 30 1997 12:12
Ricardo,


>>I am currently trying to configure an SFS server so that it can be moved as
>>part of an ASE disk service. In order to accomplish this, I think it is
>>required that the LSM volumes used by SFS are in a disk group other than
>>rootdg (rootdg is not a valid disk group to use in an ASE disk service?).

Yes, the SFS volumes must be in a disk group other than rootdg.  

Before you create the SFS using cicssfscreate, you must set the following
two environment variables to the name of the disk group containing
the volumes the SFS is to use:

	CICS_LOG_VG=<name of LSM disk group>
	CICS_SFS_VG=<name of LSM disk group>

This will allow you to use a disk group other than rootdg.
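As a minimal sketch of the ordering, using the `ase_dev` disk group name from the transcript in .2 (the cicssfscreate invocation is shown commented out, since it needs the CICS installation and DCE running):

```shell
# Both variables must be set in the environment *before* cicssfscreate runs,
# naming the LSM disk group that holds the SFS log and data volumes.
export CICS_LOG_VG=ase_dev   # disk group containing the log volume (log_Sdevel)
export CICS_SFS_VG=ase_dev   # disk group containing the data volume (sfs_Sdevel)
echo "SFS volumes will be looked for in disk group: $CICS_SFS_VG"
# cicssfscreate -v -S -I /.:/cics/sfs/devel ShortName=Sdevel UserID=Sdevel
```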

Hope this helps,

Helen.

183.4. "Thanks, it works fine" by NNTPD::"ricardo.lopez@sqo.mts.dec.com" (Ricardo Lopez Cencerrado) Tue May 13 1997 00:47
Thanks for your quick answer, and sorry for taking so long to reply.

After testing, the use of these two variables has allowed me to create the SFS
volumes on the appropriate systems.

Thanks again.

Ricardo.
[Posted by WWW Notes gateway]