Sunday, August 19, 2012

Using Concurrent I/O with JFS2 to increase Oracle performance

In many cases, the database performance achieved using Concurrent I/O with JFS2 is comparable to that obtained by using raw logical volumes.

The fastest way to transfer data between an application and permanent storage media such as disks is to directly access more primitive interfaces such as raw logical volumes. Using files for data storage involves overhead from serialization, buffering, and data copying, all of which hurt I/O performance.
Using raw logical volumes for I/O eliminates the overhead of serialization and buffering, but it also demands more skill and training from the user, since data management becomes more application specific. In addition, while file system commands do not require system administrator privileges, commands for manipulating raw logical volumes do. Even so, because of their superior performance, database applications have traditionally preferred raw logical volumes over file systems for data storage.

With the Concurrent I/O feature now available in JFS2, database performance on file systems rivals the performance achievable with raw logical volumes.

For applications that wish to bypass the buffering of memory within the file system cache, JFS2 provides Direct I/O as an option. When Direct I/O is used for a file, data is transferred directly between the disk and the application buffer, without passing through the file buffer cache.

Direct I/O can be enabled for a file either by mounting the corresponding file system with the mount -o dio option, or by opening the file with the O_DIRECT flag specified in the open() system call. When a file system is mounted with -o dio, all files in the file system use Direct I/O by default. Direct I/O can be restricted to a subset of the files in a file system by placing the files that require it in a separate subdirectory and using namefs to mount this subdirectory over the file system. For example, if a file system somefs contains some files that should use Direct I/O and others that should not, we can create a subdirectory, subsomefs, holding all the files that require Direct I/O. We then mount somefs without -o dio, and mount subsomefs as a namefs file system with the -o dio option: mount -v namefs -o dio /somefs/subsomefs /somefs.

Direct I/O benefits applications by reducing CPU consumption and eliminating the overhead of copying data twice: first between the disk and the file buffer cache, and then from the file buffer cache to the application's buffer.

The inode lock imposes write serialization at the file level. Serializing write accesses ensures that data inconsistencies due to overlapping writes do not occur, and serializing reads with respect to writes ensures that applications do not read stale data.
Oracle implements its own data serialization, usually at a finer level of granularity than the file. Such applications enforce serialization at the application level to guarantee that data inconsistencies do not occur and that stale data is never read, so they do not need the file system to serialize access for them. In fact, the inode lock hinders performance in such cases by needlessly serializing non-competing data accesses. For these applications, AIX offers the Concurrent I/O (CIO) option. Under Concurrent I/O, multiple threads can simultaneously perform reads and writes on a shared file. The option is intended primarily for relational database applications, most of which will operate under Concurrent I/O without any modification. Applications that do not enforce serialization for accesses to shared files should not use Concurrent I/O, as this could result in data corruption due to competing accesses.

Concurrent I/O can be specified for a file either through the mount command (mount -o cio) or via the open() system call (by specifying O_CIO in the OFlag parameter). When a file system is mounted with -o cio, all files in the file system use Concurrent I/O by default. Just as with Direct I/O, Concurrent I/O can be restricted to a subset of the files in a file system by placing the files that use it in a separate subdirectory and using namefs to mount this subdirectory over the file system. For example, if a file system somefs contains some files that should use Concurrent I/O and others that should not, we can create a subdirectory, subsomefs, containing all the files that use Concurrent I/O. We then mount somefs without -o cio, and mount subsomefs as a namefs file system with the -o cio option: mount -v namefs -o cio /somefs/subsomefs /somefs.

The use of Direct I/O is implicit with Concurrent I/O, and files that use Concurrent I/O automatically take the Direct I/O path. Thus, applications using Concurrent I/O are subject to the same alignment and length restrictions as Direct I/O.

As with Direct I/O, if there are multiple outstanding opens of a file and one or more of the calls did not specify O_CIO, then Concurrent I/O is not enabled for the file. Once the last conflicting access is eliminated, the file begins to use Concurrent I/O. Since Concurrent I/O implicitly uses Direct I/O, it overrides the O_DIRECT flag for a file.

Since Concurrent I/O implicitly invokes Direct I/O, all the performance considerations for Direct I/O hold for Concurrent I/O as well. Thus, applications that benefit from file system read-ahead, or that enjoy a high file system buffer cache hit rate, will probably see their performance deteriorate with Concurrent I/O, just as it would with Direct I/O. Concurrent I/O also provides no benefit when the vast majority of data accesses are reads; in such environments, read-shared, write-exclusive inode locking already delivers most of the benefits of Concurrent I/O.

Applications that use raw logical volumes for data storage don’t encounter inode lock contention since they don’t access files.

Using JFS2 Concurrent I/O for databases thus yields performance comparable to that achieved with raw logical volumes for database storage, while providing greater flexibility and ease of administration.

Concurrent I/O combines all the performance advantages of using raw logical volumes, while greatly simplifying the task of database administration. This makes Concurrent I/O a very attractive option for database storage.
