As explained in detail in this section, enabling big pages helps reduce TLB misses. However, this performance benefit is realized primarily when using large SGA sizes. Once a portion of memory is locked down for big pages, applications that use normal pages cannot access that portion of the memory. It is very important to make sure that there is enough memory for normal pages for applications and users to avoid excessive swapping. So, it is recommended that big pages be used only on systems that have large amounts of physical memory and for SGA sizes of 16GB or greater.
Big Pages in Red Hat Enterprise Linux 2.1 and Huge Pages in Red Hat Enterprise Linux 3, 4 and 5 are very useful for large Oracle SGA sizes and in general for systems with large amounts of physical memory. Using large pages optimizes the use of Translation Lookaside Buffers (TLB), locks these larger pages in RAM, and leaves the system with less bookkeeping work to do for that part of virtual memory. This is a useful feature that should be used on x86 and x86-64 platforms. The default page size in Linux for x86 is 4KB.
Physical memory is partitioned into pages which are the basic unit of memory management. When a Linux process accesses a virtual address, the CPU must translate it into a physical address. Therefore, for each Linux process the kernel maintains a page table which is used by the CPU to translate virtual addresses into physical addresses. But before the CPU can do the translation it has to perform several physical memory reads to retrieve page table information. To speed up this translation process for future references to the same virtual address, the CPU saves information for recently accessed virtual addresses in its Translation Lookaside Buffers (TLB) which is a small but very fast cache in the CPU. The use of this cache makes virtual memory access very fast. Since TLB misses are expensive, TLB hits can be improved by mapping large contiguous physical memory regions by a small number of pages. So fewer TLB entries are required to cover larger virtual address ranges. A reduced page table size also means a reduction in memory management overhead. To use larger page sizes for shared memory, Big Pages (Red Hat Enterprise Linux 2.1) or Huge Pages (Red Hat Enterprise Linux 3, 4 and 5) must be enabled which also locks these pages in physical memory.
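To make the coverage gain concrete, the following shell arithmetic compares how much address space a fixed number of TLB entries can map with 4 KB base pages versus 2 MB huge pages. The entry count of 64 is a made-up illustration; real TLB sizes vary by CPU model.

```shell
# Hypothetical TLB entry count; actual sizes vary by CPU model.
TLB_ENTRIES=64
BASE_KB=4       # default x86 page size
HUGE_KB=2048    # typical x86 huge page size

echo "4 KB pages: $(( TLB_ENTRIES * BASE_KB )) KB covered"
echo "2 MB pages: $(( TLB_ENTRIES * HUGE_KB / 1024 )) MB covered"
```

With the same 64 entries, huge pages cover 128 MB of address space instead of 256 KB, which is why a multi-gigabyte SGA benefits so much from fewer, larger pages.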
In Red Hat Enterprise Linux 2.1, large memory pages can be configured using the Big Pages (bigpages) feature. In Red Hat Enterprise Linux 3 or 4, Red Hat replaced Big Pages with a feature called Huge Pages (hugetlb), which behaves a little differently. The Huge Pages feature in Red Hat Enterprise Linux 3 or 4 allows you to dynamically allocate large memory pages without a reboot. Allocating and changing Big Pages in Red Hat Enterprise Linux 2.1 always required a reboot. However, if memory gets too fragmented in Red Hat Enterprise Linux 3 or 4, allocation of physically contiguous memory pages can fail and a reboot may become necessary.
The advantages of Big Pages and Huge Pages for database performance are:

- Increased performance through increased Translation Lookaside Buffer (TLB) hits.
- Pages are locked in memory and are never swapped out, which guarantees that shared memory such as the SGA remains in RAM.
- Contiguous pages are pre-allocated and cannot be used for anything else but System V shared memory, for example the SGA.
- Less bookkeeping work for the kernel in that part of virtual memory due to larger page sizes.
Big Pages are supported implicitly in Red Hat Enterprise Linux 2.1. But Huge Pages in Red Hat Enterprise Linux 3, 4 and 5 need to be requested explicitly by the application by using the SHM_HUGETLB flag when invoking the shmget() system call. This ensures that shared memory segments are allocated out of the Huge Pages pool. This is done automatically in Oracle 10g and 9i R2, but earlier Oracle 9i R2 versions require a patch; see Metalink Note:262004.1.
With the Big Pages and Huge Pages feature you specify how many physically contiguous large memory pages should be allocated and pinned in RAM for shared memory like Oracle SGA. For example, if you have three Oracle instances running on a single system with 2 GB SGA each, then at least 6 GB of large pages should be allocated. This will ensure that all three SGAs use large pages and remain in main physical memory. Furthermore, if you use ASM on the same system, then it is prudent to add an additional 200MB. I have seen ASM instances creating between 70 MB and 150 MB shared memory segments. And there might be other non-Oracle processes that allocate shared memory segments as well.
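The sizing arithmetic above can be sketched in the shell. The instance count, SGA size, and ASM headroom below are the example's assumptions, not recommendations; the huge page size is read from the kernel, with a fallback of 2048 KB if the field is unavailable.

```shell
# Example sizing: three instances with a 2 GB SGA each, plus ~200 MB for ASM.
SGA_TOTAL_MB=$(( 3 * 2048 ))
ASM_MB=200

# Read the huge page size from the kernel; fall back to 2048 KB if unavailable.
HUGEPAGE_KB=$(grep Hugepagesize /proc/meminfo 2>/dev/null | awk '{print $2}')
HUGEPAGE_KB=${HUGEPAGE_KB:-2048}

NR_HUGEPAGES=$(( (SGA_TOTAL_MB + ASM_MB) * 1024 / HUGEPAGE_KB ))
echo "Allocate at least $NR_HUGEPAGES huge pages"
```

With a 2 MB huge page size this works out to 3172 pages for the 6 GB + 200 MB example.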
It is, however, not recommended to allocate too many Big or Huge Pages. These pre-allocated pages can only be used for shared memory. This means that unused Big or Huge Pages will not be available for other use than for shared memory allocations even if the system runs out of memory and starts swapping. Also take note that Huge Pages are not used for the ramfs shared memory file system (see Section 14.8, “Huge Pages and Shared Memory File System in Red Hat Enterprise Linux 3”), but Big Pages can be used for the shm file system in Red Hat Enterprise Linux 2.1.
It is very important to always check the shared memory segments before starting an instance. An abandoned shared memory segment, from an instance crash for example, is not removed, it will remain allocated in the Big Pages or Huge Pages pool. This could mean that new allocated shared memory segments for the new instance SGA will not fit into the Big Pages or Huge Pages pool. For more information on removing shared memory, see Section 7.4, “Removing Shared Memory”.
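One way to perform this check from the shell is with the standard System V IPC tools. The segment id in the ipcrm example below is hypothetical; use the shmid values that ipcs actually reports on your system.

```shell
# List System V shared memory segments; a segment owned by the oracle user
# with nattch = 0 may be left over from a crashed instance and will keep
# occupying the Big Pages or Huge Pages pool.
ipcs -m

# Remove an abandoned segment by its shmid (example id only):
# ipcrm -m 32768
```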
Before configuring Big Pages, make sure you have read Section 14.3, “Sizing Big Pages and Huge Pages”.
In Red Hat Enterprise Linux 4 or 5 the size of the Huge Pages pool is specified by the desired number of Huge Pages. To calculate the number of Huge Pages you first need to know the Huge Page size. To obtain the size of Huge Pages, execute the following command:
$ grep Hugepagesize /proc/meminfo
Hugepagesize:     2048 kB
The output shows that the size of a Huge Page on this system is 2MB. This means if a 1GB Huge Pages pool should be allocated, then 512 Huge Pages need to be allocated. The number of Huge Pages can be configured and activated by setting nr_hugepages in the proc file system. For example, to allocate 512 Huge Pages, execute:
# echo 512 > /proc/sys/vm/nr_hugepages
Alternatively, you can use sysctl(8) to change it:
# sysctl -w vm.nr_hugepages=512
To make the change permanent, add the following line to the file /etc/sysctl.conf, which is read during the boot process. The Huge Pages pool is usually guaranteed if requested at boot time:
# echo "vm.nr_hugepages=512" >> /etc/sysctl.conf
If you allocate a large number of Huge Pages, the execution of the above commands can take a while. To verify whether the kernel was able to allocate the requested number of Huge Pages, run:
$ grep HugePages_Total /proc/meminfo
HugePages_Total:   512
The output shows that 512 Huge Pages have been allocated. Since the size of Huge Pages is 2048 KB, a Huge Page pool of 1GB has been allocated and pinned in physical memory.
If HugePages_Total is lower than what was requested with nr_hugepages, then the system either does not have enough memory or there are not enough physically contiguous free pages. In the latter case the system needs to be rebooted, which should give you a better chance of getting the memory.
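This check can be scripted. The sketch below compares the allocated pool against the requested count (512 follows the example above); the fallback of 0 covers kernels without huge page support.

```shell
REQUESTED=512
TOTAL=$(grep HugePages_Total /proc/meminfo 2>/dev/null | awk '{print $2}')
TOTAL=${TOTAL:-0}

if [ "$TOTAL" -lt "$REQUESTED" ]; then
    echo "Only $TOTAL of $REQUESTED huge pages allocated"
    echo "Not enough free memory or memory too fragmented; a reboot may help"
else
    echo "Huge page pool fully allocated"
fi
```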
To get the number of free Huge Pages on the system, execute:
$ grep HugePages_Free /proc/meminfo
Free system memory will automatically be decreased by the size of the Huge Pages pool allocation, regardless of whether the pool is being used by an application like Oracle DB or not:
$ grep MemFree /proc/meminfo
In order for an Oracle database to use Huge Pages in Red Hat Enterprise Linux 4 or 5, you also need to increase the ulimit parameter "memlock" for the oracle user in /etc/security/limits.conf if "max locked memory" is not unlimited or is too small; see ulimit -a or ulimit -l. An example can be seen below.
oracle soft memlock 1048576
oracle hard memlock 1048576
The memlock parameter specifies how much memory the oracle user can lock into its address space. Note that Huge Pages are locked in physical memory. The memlock setting is specified in KB and must match the memory size of the number of Huge Pages that Oracle should be able to allocate. So if the Oracle database should be able to use 512 Huge Pages, then memlock must be set to at least 512 * Hugepagesize, which on this system would be 1048576 KB (512 * 1024 * 2). If memlock is too small, then not a single Huge Page will be allocated when the Oracle database starts. For more information on setting shell limits, see Chapter 22, Setting Shell Limits for Your Oracle User.
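The memlock arithmetic above can be sketched as follows; 512 pages and a 2 MB huge page size are the example's assumptions and should be replaced with your own values.

```shell
NR_HUGEPAGES=512
HUGEPAGE_KB=2048   # value reported by "grep Hugepagesize /proc/meminfo"

# memlock is specified in KB in /etc/security/limits.conf.
MEMLOCK_KB=$(( NR_HUGEPAGES * HUGEPAGE_KB ))
echo "oracle soft memlock $MEMLOCK_KB"
echo "oracle hard memlock $MEMLOCK_KB"
```

For this example the computed value is 1048576 KB, matching the limits.conf entries shown above.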
Log in as the oracle user again and verify the new memlock setting by executing ulimit -l before starting the database.
After an Oracle DB startup you can verify the usage of Huge Pages by checking whether the number of free Huge Pages has decreased:
$ grep HugePages_Free /proc/meminfo
To free the Huge Pages pool, you can execute:
# echo 0 > /proc/sys/vm/nr_hugepages

This command usually takes a while to finish.