Recently I ran into an issue while reducing a JFS2 filesystem on AIX 6.1, even though there was enough free space to shrink it.
root@umaix /tmp>df -g /orafs1 
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/oralv1   100.00    75.00   25%      555     1%  /orafs1

root@umaix /tmp>chfs -a size=-15G /orafs1
chfs: There is not enough free space to shrink the file system.
This issue can occur whenever you try to release a big chunk of space (15 GB in this case) that may not be contiguous in the filesystem, because files are scattered everywhere.

Try the following methods one by one until the issue is fixed.

1. Try to defrag the FS:

#defragfs /orafs1

Note: running defragfs with the -q, -r, or -s flag only reports fragmentation statistics; run it without flags to actually defragment.

2. Reduce in smaller chunks:

If you still can't reduce it after defragmenting, try shrinking the filesystem in smaller chunks: instead of 15 GB at once, reduce by 1 or 2 GB, then repeat the operation until you reach the target size.
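The incremental approach can be scripted. This is a sketch only: the real command on AIX is `chfs -a size=-1G /orafs1`, but since that only exists on AIX, the shrink step is stubbed here with a shell function (the numbers are made up for illustration) so the loop logic itself can be read and run anywhere:

```shell
left=15                       # GB still to reclaim (from the example above)
free=7                        # pretend only 7 GB worth of shrinks will succeed
shrink_1g() {                 # stand-in for: chfs -a size=-1G /orafs1
    [ "$free" -gt 0 ] && free=$((free - 1))
}
while [ "$left" -gt 0 ]; do
    if shrink_1g; then
        left=$((left - 1))
        echo "shrunk 1G, ${left}G to go"
    else
        echo "chfs refused; stopping with ${left}G still to reclaim"
        break
    fi
done
```

Each small shrink frees and coalesces a little space, which is why repeating it can succeed where a single large reduction fails.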

3. Check the processes:

Sometimes processes keep large files open or use a lot of temporary space in the filesystem.
Check which processes/applications are running against the filesystem and, if possible, stop them temporarily:
#fuser -cux <filesystem>

4. Move the large files and try to shrink

Look for large files with the find command and move them out temporarily, just to see whether the filesystem can be shrunk without them (-size +2048 matches files larger than 2048 512-byte blocks, i.e. over 1 MB):
#find /<filesystem> -xdev -size +2048 -ls | sort -rn -k7 | pg
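The find-and-relocate step can be sketched as below. The directory names are placeholders (on the real system FS would be /orafs1 and SCRATCH a directory on a *different* filesystem); here mktemp and a dd-created demo file stand in so the flow is self-contained:

```shell
FS=$(mktemp -d)            # placeholder for /orafs1
SCRATCH=$(mktemp -d)       # holding area - must live on another filesystem
# create a 2 MB demo "database file" and a tiny file
dd if=/dev/zero of="$FS/big.dbf" bs=1024 count=2048 2>/dev/null
: > "$FS/small.txt"
# -xdev: do not cross into other mounted filesystems
# -size +2048: larger than 2048 512-byte blocks, i.e. over 1 MB
find "$FS" -xdev -type f -size +2048 -exec mv {} "$SCRATCH"/ \;
ls "$SCRATCH"              # the big file was parked; now retry chfs
```

After the shrink succeeds, move the parked files back with `mv "$SCRATCH"/* "$FS"/`.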

Finally, if none of the above methods work, the last alternative is to recreate the filesystem.

==> Be very careful here: take a filesystem backup first, and coordinate with the application team before removing the filesystem.

5. Recreate the filesystem:

  - Take a data backup of the filesystem (very important, don't skip this).
    Either use your backup tools (TSM / NetBackup) or move the data to a temporary directory.
  - Remove the filesystem:
       #rmfs /orafs1
  - Create the filesystem again:
       #mklv -y oralv1 -t jfs2 oravg 600   (in this case we need 75 GB and the PP size is 128 MB)
       #crfs -v jfs2 -d oralv1 -m /orafs1 -A yes   (create the /orafs1 filesystem)
  - Restore the data to the filesystem.
  - Verify the filesystem size:

    root@umaix /tmp>df -g /orafs1
    Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
    /dev/oralv1     75.00     50.00   33%      555     1%  /orafs1
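A quick sanity check of the partition count used in the mklv command above: the 600 is the number of physical partitions, and with the 128 MB PP size from this example it works out to the 75 GB we need.

```shell
PP_MB=128                            # physical partition size of oravg (example)
TARGET_GB=75                         # desired size of the new /orafs1
PPS=$(( TARGET_GB * 1024 / PP_MB ))  # partitions needed
echo "$PPS partitions of ${PP_MB}MB = $(( PPS * PP_MB / 1024 ))GB"
```

On your own volume group, check the actual PP size with `lsvg <vgname>` before doing this arithmetic.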



---------------------------------------------------------------------------------------------------------------------------------



Explanation of the behavior of shrinkfs:

At the beginning of a JFS2 filesystem are the superblock, the superblock backup, and then the data and metadata of the filesystem. At the end are the inline log (if there is one) and the fsck working area.

The filesystem shrink works like this: when chfs is run and a size is given (either -NUM or an absolute NUM size), AIX calculates where that boundary falls within the filesystem. This marker is known as "the fence". The system then calculates how much data lies outside the fence and must be moved inside it (since we don't want to lose data). It calculates the free space available, and subtracts a minimal amount for the fsck working area and inline log (if any), which must go at the tail end of the filesystem.

chfs then has to do some complex calculating: in the area outside the fence, is there any data that must be saved and moved inside? In the area inside the fence, how much data is there? Is it contiguous? How much free space is there to play with? Is there enough space to move the data from outside the fence inside it? And lastly, is there also enough room to move the fsck working area and inline log inside along with it?

It does not try to reorganize the data in any way. If a large file outside the fence is made up of contiguous extents, AIX looks for an equivalent contiguous free-space area inside the fence to move the file to. If it can't find one, whether due to a lack of space or to free-space fragmentation, the operation fails and the filesystem is not shrunk. The chfs shrink will also not purposely fragment a file to force it to fit within fragmented free space.
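A toy model makes the contiguity requirement concrete. This is an illustration only, not AIX's actual algorithm: each character below is one block (D = data, . = free), and a 4-block contiguous file outside the fence needs a 4-block contiguous free run inside it:

```shell
inside="DD..D..D.D.."   # blocks inside the fence: 7 free, longest run only 2
need=4                  # contiguous extent sitting beyond the fence
run=0; best=0
for c in $(printf '%s' "$inside" | sed 's/./& /g'); do
    if [ "$c" = "." ]; then
        run=$((run + 1))
        [ "$run" -gt "$best" ] && best=$run
    else
        run=0
    fi
done
if [ "$best" -ge "$need" ]; then
    echo "file fits: shrink can proceed"
else
    echo "largest free run $best < $need: fails like chfs with ENOSPC"
fi
```

Seven blocks are free, more than the four needed, yet the shrink still fails because no single free run is long enough. That is exactly the df-says-there-is-space-but-chfs-refuses situation.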

In some cases running defragfs on the filesystem will help, but often it doesn't. The reason is that the purpose of defragfs is to coalesce files into more contiguous extents, not to coalesce the free space in between them. If non-contiguous free space is the issue, the only way to coalesce it into large enough regions is to back up the data, remove it, and restore it. After that, the shrink may find enough contiguous free space when chfs is run to accommodate the data from outside the fence.


There is a limit to how much chfs can shrink a filesystem. This is because chfs has to take into account not only the data being moved around, but also the need to keep the contiguous blocks of each file contiguous. So if a filesystem has a lot of free space that is broken up into small areas, but the files being moved are large, the shrink may fail even though it looks like there is plenty of space left.

The free space reported by the df command is not necessarily the space that can be truncated by a shrinkFS request, due to filesystem fragmentation. A fragmented filesystem may not be shrinkable if there is not enough free space for an object to be moved out of the region being truncated, and shrinkFS does not perform filesystem defragmentation. In that case the chfs command fails with the message:

chfs: There is not enough free space to shrink the file system - return code 28 (ENOSPC).

One common situation that limits customers is the presence of large, unfragmented files in the filesystem, such as binary database files. If a filesystem consists of a few extremely large files, then depending on how they are laid out, chfs may fail to find enough space inside the fence to move the data from outside it when attempting the shrink.
