
Conference decwet::advfs_support

Title:AdvFS Support/Info/Questions Notefile
Notice:note 187 is Freq Asked Questions;note 7 is support policy
Moderator:DECWET::DADDAMIO
Created:Wed Jun 02 1993
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:1077
Total number of notes:4417

1026.0. "Different free space issue" by NETRIX::"mcdonald@decatl.alf.dec.com" (John McDonald) Tue Mar 25 1997 22:33

I have a customer who is experiencing problems with AdvFS disk
space accounting: df and du disagree. I've looked at all of the
other notes that reference this kind of problem, but I don't see
anything quite like this.

The system is a 2100 running DU 4.0b (no patches). Hostname
is 'howler'. It is used as a web server/mail server system.

This first output shows the df output for three filesets in a
single domain:


howler# df -k -t advfs
Filesystem       1024-blocks      Used  Available  Capacity  Mounted on
raid1#webdata2       4110336   3513266     597070       86%  /webdata2
raid1#online1        4110336    412927    1025280       29%  /online1
raid1#online2        4110336    446408    1025280       31%  /online2

He then does a du and sees the following:

howler# du -kxs /webdata2 /online1 /online2
1444505 /webdata2
413004  /online1
446481  /online2
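The gap above is telling: du walks the files themselves, while the thread suggests df's "Used" column on AdvFS comes from the fileset's quota accounting, so corrupted quota records show up as exactly this kind of df/du mismatch. A rough sketch of the size of the discrepancy (the drift_pct helper name is mine, not from the note):

```shell
# Hypothetical helper (not from the note): percentage by which df's "Used"
# figure exceeds du's total for the same fileset, both in 1K blocks.
drift_pct() {
    df_used=$1
    du_used=$2
    # Integer arithmetic is fine here; we only want a rough percentage.
    echo $(( ($df_used - $du_used) * 100 / $du_used ))
}

# Figures for raid1#webdata2 from the df/du output above:
drift_pct 3513266 1444505    # prints 143
```

So df claims roughly 143% more space in use on /webdata2 than du can account for, while the two smaller filesets are nearly in agreement.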

I then asked him to run vquotacheck (as stated in another note
entry). He ran it three times on the first fileset and saw the following:

howler# vquotacheck -v raid1#webdata2
*** Checking user and group quotas for raid1#webdata2 (/webdata2)
128      fixed: inodes 4 -> 6   blocks 482970 -> 954666
nobody   fixed: inodes 11534 -> 11522   blocks 3067275 -> 473971
amazon   fixed: inodes 3914 -> 3916     blocks 476917 -> 1001012
nobody   fixed: inodes 8372 -> 8360     blocks 2971108 -> 377820
howler# vquotacheck -v raid1#webdata2
*** Checking user and group quotas for raid1#webdata2 (/webdata2)
nobody   fixed: blocks 474107 -> 474115
nobody   fixed: blocks 377956 -> 377964
howler# vquotacheck -v raid1#webdata2
*** Checking user and group quotas for raid1#webdata2 (/webdata2)

When he ran it on the second fileset, he saw the following:

howler#  vquotacheck -v /online1
*** Checking user and group quotas for raid1#online1 (/online1)
nobody   fixed: inodes 69 -> 72 blocks 1813 -> 1858
amazon   fixed: inodes 7095 -> 7098     blocks 168950 -> 168995
howler#  vquotacheck -v /online1
*** Checking user and group quotas for raid1#online1 (/online1)

The third fileset showed no problems as follows:

howler# vquotacheck -v /online2
*** Checking user and group quotas for raid1#online2 (/online2)

Running vquotacheck seems to fix everything up.
The problem is that the corruption recurs regularly, so he has to
keep running vquotacheck to clean things up, and that's not
acceptable as this is their primary server. Is this a known
problem in 4.0b? I've checked for patches and I couldn't
find any that seem to apply.

Any ideas?

thanx.
John McDonald



[Posted by WWW Notes gateway]
1026.1. "One more thing..." by NETRIX::"mcdonald@decatl.alf.dec.com" (John McDonald) Wed Mar 26 1997 11:14

I was pointed to the release notes re: the following paragraph:

Under certain conditions, the disk usage information on an AdvFS file system 
may become corrupted. To correct this, turn on quotas in the /etc/fstab file 
for the affected file system, and then run the vquotacheck command on the 
file system. This should correct the disk usage information. 
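The release-notes workaround can be made concrete. A sketch of an /etc/fstab entry for the first fileset in this note, assuming the usual Digital UNIX userquota and groupquota mount options (check the fstab reference page on the target system before relying on this):

```
raid1#webdata2  /webdata2  advfs  rw,userquota,groupquota  0  2
```

Once the fileset is mounted with quotas enabled, vquotacheck can be run against it as the release notes describe.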

Unfortunately, this is happening to the customer several times a week on
a 24x7 production system. The manuals say that to run vquotacheck the
filesets have to be quiescent, so the customer plays it safe and
dismounts them before running it. Would running it on a mounted fileset
cause ANY problems?
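For reference, the dismount/check/remount cycle the customer follows can be wrapped in a small script. This is only a sketch of that procedure (the function name and DRYRUN guard are mine; the path assumes vquotacheck lives in /usr/sbin):

```shell
# Hypothetical wrapper (names are mine, not from the note): run vquotacheck
# against a quiescent fileset by dismounting it first, as the manuals advise.
# With DRYRUN=1 the commands are only echoed, not executed.
quiet_vquotacheck() {
    fileset=$1    # e.g. raid1#webdata2
    mntpt=$2      # e.g. /webdata2
    run() { [ "${DRYRUN:-0}" = 1 ] && echo "$@" || "$@"; }
    run umount "$mntpt" &&
    run /usr/sbin/vquotacheck -v "$fileset" &&
    run mount -t advfs "$fileset" "$mntpt"
}

# Dry run (the real commands only exist on Digital UNIX):
DRYRUN=1
quiet_vquotacheck 'raid1#webdata2' /webdata2
```

The dry run just prints the three commands in order, so the sequence can be reviewed before scheduling it for real in a quiet window.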

Also, with the corrupted usage accounting, he is actually hitting a hard
space limit (i.e. 100% full) on some filesets, which causes his apps to
hang, while other filesets show plenty of space in the domain.

If our answer is 'live with it', I think I'd like to have someone from
product management call him up and explain it!

John McDonald
Atlanta CSC
[Posted by WWW Notes gateway]
1026.2. "same problem!!!!" by TRN02::ALMONDO (Quid ut UNIX ?) Thu Mar 27 1997 05:14

    
     I have the same problem in a 24GB domain that is supposed to
     run in 24x7 mode.

     To add some complexity, the file systems are not in the fstab
     file but are owned by a DECsafe service.

     We hope for a better (and online) solution to this problem.
    
     Regards,
     Mario
    
     
1026.3. "please file a CLD" by DECWET::DADDAMIO (Design Twice, Code Once) Thu Mar 27 1997 16:14

    Could both of the people who posted problems here please file CLDs
    for their customers? We have been working on some quota problems, but
    they don't appear to be the same as this one. Also, filing a CLD will
    keep you in the loop so your customers get patches as soon as
    possible.
1026.4. "CLD = QAR?" by NETRIX::"mcdonald@decatl.alf.dec.com" (John McDonald) Thu Mar 27 1997 18:59

By a CLD, do you mean a QAR?

John McDonald
Atlanta CSC

[Posted by WWW Notes gateway]
1026.5. by DECWET::DADDAMIO (Design Twice, Code Once) Thu Mar 27 1997 19:35

    No, if it's for a customer problem you need to go through IPMT. That
    way a patch will be produced if possible. A QAR is considered internal.
    The IPMT entry ends up being a CLD.