Linux HugePages


* What you may already know:

 - Larger memory page size (usually 2 MB, depending on architecture)
 - Processes share page table entries, so there is less kernel page table management overhead and fewer TLB misses
 - Always resident in memory
 - On Oracle 11g, can be enabled only if AMM is disabled (memory_target set to 0)
 - MOS notes 361323.1 (HugePages on Linux: What It Is... and What It Is Not...), 744769.1, 748637.1
 - Similar to Solaris ISM (intimate shared memory) in almost every way
New in 11.2.0.2 (and up):
 - MOS note 1392497.1 (USE_LARGE_PAGES To Enable HugePages In 11.2)
New in SLES11, RHEL6, OEL6 and UEK2:
 - MOS note 1557478.1 (Disable Transparent HugePages on SLES11, RHEL6, OEL6 and UEK2 Kernels)
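
A quick way to check the HugePage size on your architecture (2048 kB on x86_64), and whether a pool 
is configured at all:

grep -i ^huge /proc/meminfo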




* HugePages memory only shows as resident memory on Red Hat 4, not 5

(Actually, this is most likely dependent on the Linux kernel version, not the Red Hat version.) On a 
RHEL 4 server, when HugePages is used, `top' or `ps' shows that an Oracle process's resident memory 
is only slightly smaller than its virtual memory. But on RHEL 5, resident memory is much smaller. 
This, however, does not change the fact that HugePages memory is guaranteed to be locked in RAM. 
David Gibson, a HugePages developer, says in a private email: "hugepages are always resident, never 
swapped. This [RHEL 5 showing non-resident HugePages] must be something changed in the wider MM code".


* vm.nr_hugepages, memlock, and SGA sizing

The process memory lock limit, memlock in /etc/security/limits.conf (in kB), can be set to any number 
larger than vm.nr_hugepages (set in /etc/sysctl.conf, in pages) multiplied by Hugepagesize. It's just 
a number the kernel compares against when a process asks to lock a memory segment into RAM, to decide 
whether that's allowed. The memlock setting in limits.conf alone doesn't set aside any memory, so 
it's OK to set it to a very big number. But vm.nr_hugepages actually allocates memory; it must not 
be too large, or your box will run out of memory or fail to boot.
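
For example, assuming a 2048 kB Hugepagesize and an SGA of roughly 6 GB (all numbers here are 
hypothetical):

# /etc/sysctl.conf: this actually allocates 3090 * 2048 kB, about 6 GB, of HugePages
vm.nr_hugepages = 3090

# /etc/security/limits.conf: only an upper bound, in kB, so it can be generous
oracle soft memlock 8590000
oracle hard memlock 8590000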

If, after starting the instance, HugePages is found not to be used, lower the SGA a lot and try 
again. Then gradually add the SGA back.


* Don't waste memory

If HugePages_Free is not much smaller than HugePages_Total in /proc/meminfo, you're wasting too 
much memory: it is allocated as HugePages but not used, because only SysV-type shared memory, such 
as the Oracle SGA, can use HugePages. You can either increase the Oracle SGA to close to 
HugePages_Total (note its unit is Hugepagesize), or decrease HugePages_Total, by 
echo newvalue > /proc/sys/vm/nr_hugepages 
or by editing /etc/sysctl.conf and running `sysctl -p'. Although increasing HugePages dynamically 
may not work because the system's free memory may already be fragmented, decreasing it always 
works.


* Writing to /proc/sys/vm/nr_hugepages

That "file" is writable. If you `echo {the number in /etc/sysctl.conf} > /proc/sys/vm/nr_hugepages', 
it's like running `sysctl -p' except you're only updating vm.nr_hugepages. Keep running `echo' and 
`cat nr_hugepages' to see how many HugePages the OS actually created.
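
A minimal interaction, assuming 3090 is the number in /etc/sysctl.conf:

echo 3090 > /proc/sys/vm/nr_hugepages
cat /proc/sys/vm/nr_hugepages   <- may print less than 3090 if free memory is too fragmented

Repeating the echo sometimes creeps the number up as the kernel finds more contiguous memory.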


* Checking usage

/proc/meminfo or /proc/sys/vm/nr_hugepages is the file we'll look at (unless you're interested in 
NUMA node specific values which are in /sys/devices/system/node/node*/meminfo). Focus on 
HugePages_* and PageTables. Suppose the HugePages_* numbers are like the following:

HugePages_Total:  3090
HugePages_Free:   2358
HugePages_Rsvd:   2341

It means 3090-2358=732 pages are used (actually being used, not just reserved to be used). 2341 pages 
are reserved. So the wastage, i.e. HugePages memory that will never be used, is 2358-2341=17 pages. 
In short, you can say that 
HugePages_Total - HugePages_Free + HugePages_Rsvd 
is or will be used. 

Based on this understanding, you can use the difference between HugePages_Free and HugePages_Rsvd 
to check whether your HugePages are used by the Oracle SGA. Since 2358-2341=17 pages, or 34 MB, is 
small, we're good. Don't assume HugePages is not used just because the numbers are big; an instance 
that has just started hasn't really used the memory yet.

Here's an easy way to understand the HugePages entries in /proc/meminfo.
(Ref: http://www.freelists.org/post/oracle-l/OT-Blog-entry-on-hugepages,20)

UUUUUFFFF <-- Total split into really used (U) and free (F)
UUUUURRR. <-- Total split into really used (U), reserved (R) and really free (.)

If one letter or dot is one HugePage, the above says

HugePages_Total: 9
HugePages_Free:  4
HugePages_Rsvd:  3

and you'll have 4-3=1 page completely wasted.
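
You can pull these numbers and do the arithmetic in one go; a small sketch:

awk '/^HugePages_Total/ {t=$2} /^HugePages_Free/ {f=$2} /^HugePages_Rsvd/ {r=$2}
     END {print "used=" (t-f), "rsvd=" r, "wasted=" (f-r)}' /proc/meminfo

With the 9/4/3 example above, it prints "used=5 rsvd=3 wasted=1".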

If you want to check more, run `strace -f -e trace=process,ipc sqlplus / as sysdba' and startup (the 
ipc class is what captures shmget). Look for SHM_HUGETLB in the 3rd arg to shmget(). Unfortunately, 
because Linux shmat() doesn't have an option for this flag, tracing the listener to follow down to 
the clone()'d server process won't work.[note1] Also, Linux's `pmap' doesn't have the -s option that 
Solaris has to check the page size of individual mappings inside a process's memory space.[note2]

memlock affects the shell's `ulimit -l' setting. Make sure your shell has the desired setting before 
starting the DB instance.
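
Since `ulimit -l' reports in kB, a quick sanity check before startup (8590000 is the memlock value 
used elsewhere on this page; yours may differ):

ulimit -l   <- should print 8590000 or "unlimited", not the old default of 32 or 64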

You can watch how the numbers HugePages_Free and HugePages_Rsvd change while you start up or shut 
down an instance that uses HugePages (adjust the grep pattern as needed):

while true; do
 for i in $(grep ^Huge /proc/meminfo | head -3 | awk '{print $2}'); do
  echo -n "$i "
 done
 echo ""
 sleep 5
done

The output is like the following (numbers are HugePages_Total, HugePages_Free, HugePages_Rsvd):

512 225 192
512 225 192
512 225 192
512 512 0   <- Instance down. All HugePages freed. (This is the last moment of database shutdown.)
512 512 0
512 371 338 <- Startup. 338 pages free but reserved (i.e. 371-338=33 pages "real" free), 512-371=141 pages used
512 329 296 <- 512-329=183 pages used, up by 183-141=42, reserved pages down by 42, "real" free unchanged
512 227 194 <- 512-227=285 pages used, up by 285-183=102, reserved down by 102 too, "real" free unchanged

It indicates that when the instance is started, HugePages memory pages are immediately reserved. 
This is a fast process because nothing is written to the pages yet (remember reserved is just a 
special type of free; see http://linux-mm.org/DynamicHugetlbPool). Then, as the pages are written 
to, they're taken off the reserved list and used. This server has 33 "real" free pages wasted; I 
could have been more diligent and not assigned them to HugePages.

Note that older versions of the HugePages code don't show reserved pages. On Red Hat Linux, the 
change is between RHEL 4 and 5.


* 11g AMM

11g Automatic Memory Management brings the PGA under automatic management. But the PGA can never be 
allocated from HugePages memory.[note3] I would set memory_target to 0 to disable AMM and configure 
HugePages as usual. HugePages is a far more appealing feature than AMM; if I have to sacrifice one 
of the two, I sacrifice AMM. The usage patterns of the SGA and PGA are so different that they should 
be managed separately anyway. To name one issue with AMM: it requires hundreds if not thousands of 
file descriptors for *each* server process to open *all* the files under /dev/shm, most likely 4 MB 
each (the SGA granule size, _ksmg_granule_size). 
See http://download.oracle.com/docs/cd/B28359_01/install.111/b32002/pre_install.htm#sthref71
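
When an instance does run with AMM, you can inspect those /dev/shm files yourself (the ora_<SID> 
prefix below is what I'd expect, not verified on all versions):

ls /dev/shm | wc -l     <- roughly memory_target divided by the granule size
ls -l /dev/shm | head   <- files named like ora_<SID>_..., mostly 4 MB each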


* In 11.2.0.1, an instance started by srvctl may not use HugePages, due to Bug 9251136 "INSTANCE 
WILL NOT USE HUGEPAGE IF STARTED BY SRVCTL". The root cause is that process ulimit settings (as in 
/etc/security/limits.conf) are not used. To confirm, compare 
`grep "locked memory" /proc/<any instance process pid>/limits' with your setting. You can work 
around the bug by setting the limit in either $CRS_HOME/bin/ohasd or /etc/init.d/ohasd, but not 
/etc/profile or /etc/init.d/init.ohasd:

# diff /u01/app/11.2.0/grid/bin/ohasd /u01/app/11.2.0/grid/bin/ohasd.bak
5,7d4
< #20100319 YongHuang: Increase process max locked memory to allow HugePages as workaround for Bug 9251136
< ulimit -l 8590000
<

If you modify $CRS_HOME/bin/ohasd, remember to update it whenever you apply a clusterware patch. If 
possible, modifying /etc/init.d/ohasd is preferred, even though it's owned by root, because it's 
less likely to be overwritten. 


* Oracle instance still doesn't use HugePages. Now what?

Try decreasing the SGA a little.
Try a simple, non-Oracle-specific test program:
http://www.mjmwired.net/kernel/Documentation/vm/hugepage-shm.c
Make sure /proc/sys/kernel/shmmax and /proc/sys/kernel/shmall are big enough. They can be set to 
very big numbers with no ill effect; see the sketch below.
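
A sketch of generous /etc/sysctl.conf values for a hypothetical 64 GB box (shmmax is in bytes, 
shmall in 4 kB pages):

kernel.shmmax = 68719476736
kernel.shmall = 16777216    <- 16777216 * 4 kB = 64 GB, which covers shmmax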


* Transparent HugePages

Reference: MOS note 1557478.1.
The directory is /sys/kernel/mm/transparent_hugepage (singular), not transparent_hugepages. In fact, 
it may not exist, and if it does it may be a symlink to redhat_transparent_hugepage. You can 
dynamically disable Transparent HugePages by `echo never > enabled' and `echo never > defrag' in 
that directory.
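
A minimal sketch (on RHEL 6 the directory may be /sys/kernel/mm/redhat_transparent_hugepage instead):

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
cat /sys/kernel/mm/transparent_hugepage/enabled   <- the word in brackets, e.g. [never], is active

To survive reboots, add transparent_hugepage=never to the kernel boot line.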


* Further reading

http://linux-mm.org/HugePages
For Transparent HugePages:
http://www.mjmwired.net/kernel/Documentation/vm/transhuge.txt
http://structureddata.org/2012/06/18/linux-6-transparent-huge-pages-and-hadoop-workloads/

_____________
[note1] On Solaris, you can run `truss -f -p <listener PID>' and connect to the database through 
Oracle Net. The trace will show e.g.
shmat(1979711503, 0x40280000000, 040000)        = 0x40280000000
where 040000 is SHM_SHARE_MMU according to /usr/include/sys/shm.h.

[note2] At least not until this patch is in your kernel: http://lkml.org/lkml/2008/10/3/250. That patch 
may be in kernel 2.6.29 (http://www.kernel.org/pub/linux/kernel/v2.6/). Once you have that, as on Red Hat 
Linux 6, you'll have KernelPageSize and MMUPageSize in /proc/<pid>/smaps 
(Ref: http://www.freelists.org/post/oracle-l/OT-Blog-entry-on-hugepages,20):

# cat /proc/<any pid of Oracle instance>/smaps
...
61000000-a7000000 rwxs 00000000 00:0c 1146885                            /SYSV00000000 (deleted)
Size:            1146880 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:     2048 kB  <-- 2MB HugePage size
MMUPageSize:        2048 kB  <-- 2MB HugePage size

With that knowledge, you can check whether a running Oracle instance is using HugePages without 
checking the alert.log (which may have been recycled, or simply takes longer to check):

$ grep ^KernelPageSize /proc/15824/smaps | grep -v '4 kB$' #15824 is an Oracle process
KernelPageSize:     2048 kB
...

No output means HugePages is not used; all pages are of the standard 4 kB size.

[note3] For now at least. See Kevin Closson's blog for more:
http://kevinclosson.wordpress.com/2007/08/23/oracle11g-automatic-memory-management-and-linux-hugepages-support/

