
Performance optimization on a virtual Check Point management console
01.04.2016

Best Practices - Performance Optimization of Security Management Server installed on VMware ESX Virtual Machine

Product: Security Management, Multi-Domain Management / Provider-1, SmartEvent / Eventia Analyzer

Version: All

OS: Gaia, SecurePlatform 2.6, Linux

Platform / Model: VMware ESX

Solution

 

The following configurations are strongly recommended to optimize a Check Point Security Management Server installed on a VMware Virtual Machine:

Virtual Machine Guest Operating System

When manually building a Virtual Machine to run Check Point software, define the Guest operating system as "RedHat Enterprise Linux version 5 (64-bit)" to achieve optimal virtual hardware presentation.
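For reference, this selection maps to the guestOS key in the Virtual Machine's .vmx configuration file. The identifier below is the standard VMX value for that OS type and is an assumption here, not taken from the original article:

guestOS = "rhel5-64"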

Disk

Always use Thick provisioning (thick/lazy is acceptable), never Thin-provision disk resources.
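If an existing disk was thin-provisioned, one way to convert it is to clone it from the ESX(i) console with vmkfstools; this is a sketch, and the datastore path and file names below are placeholders:

# vmkfstools -i /vmfs/volumes/datastore1/mgmt/mgmt-thin.vmdk /vmfs/volumes/datastore1/mgmt/mgmt-thick.vmdk -d eagerzeroedthick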

Make sure the disk partitions within the guest are aligned.

Unaligned or misaligned partitioning results in the I/O crossing a track boundary, resulting in additional I/O. This incurs a penalty on both latency and throughput. The additional I/O can impact system resources significantly on some host types and some applications - especially disk-intensive applications, such as SmartEvent or heavily loaded logging modules. An aligned partition ensures that the single I/O is serviced by a single device, eliminating the additional I/O and resulting in overall performance improvement.

For more information and remediation, refer to the documentation of the SAN provider.
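As a quick in-guest check (assuming fdisk is available in the guest; /dev/sda is a placeholder), list the partition table in sectors. A partition whose starting sector is divisible by 2048 is aligned to a 1 MB boundary; the SAN vendor's documentation defines the offset actually required:

# fdisk -lu /dev/sda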

Memory

Allocate at least 6 GB of memory to the Virtual Machine. For Virtual Machines running Multi-Domain Security Management Server, plan to allocate 6 GB for the base installation plus 1 GB for each additional Domain. Consider reserving 50% of the memory allocated and consider increasing the Virtual Machine's resource shares allocation.
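For example, under this guideline a Multi-Domain Security Management Server hosting 10 Domains (a hypothetical count) would be allocated 6 GB + 10 x 1 GB = 16 GB, with a 50% reservation of 8 GB.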

vCPUs

In multi-CPU (SMP) guests, the guest operating system can migrate processes from one vCPU to another. This migration incurs a small CPU overhead. If the migration is very frequent, it might be helpful to pin guest threads or processes to specific vCPUs. Allocate only as many vCPUs as are necessary. In most Security Management Server (single-domain) implementations, use no more than two (2) vCPUs. For heavily-subscribed environments, consider reserving at least 30% of the CPU frequency and consider increasing the Virtual Machine's resource shares allocation.
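As an illustration of pinning, a process in a generic Linux guest can be bound to a specific vCPU with taskset; the PID is a placeholder, and this is a generic Linux technique rather than a Check Point-specific procedure:

# taskset -pc 0 <PID>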

Virtual Network Adapter

The default virtual network adapter emulated in a Virtual Machine is either an AMD PCnet32 device (vlance / "Flexible"), or an Intel E1000 device (E1000). Never utilize the "Flexible" NIC driver in SecurePlatform OS / Gaia OS, as it has been shown to carry a significant performance penalty. In most cases, Check Point recommends the Intel E1000 device be utilized. When configuring the guest Virtual Machine as noted above, this is the default NIC emulation.

VMware also offers the VMXNET family of paravirtualized network adapters. The VMXNET family contains VMXNET, Enhanced VMXNET (available since ESX/ESXi 3.5), and VMXNET Generation 3 (VMXNET3; available since ESX/ESXi 4.0). The latest releases of the Gaia OS include the VMXNET drivers integrated, but R&D recommends against using these drivers except in cases where Check Point Security Gateway VE R77.10 or newer is used.
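To verify which driver a guest interface is actually using (eth0 is a placeholder), query it with the standard Linux ethtool utility; the driver field should report e1000 when the Intel E1000 device is emulated:

# ethtool -i eth0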

In some cases, low receive throughput in a Virtual Machine can be caused by insufficient receive buffers in the receiving network device. If the receive ring in the guest operating system's network driver overflows, packets will be dropped in the VMkernel, degrading network throughput. A possible workaround is to increase the number of receive buffers, though this might increase the host physical CPU workload. For VMXNET3 and E1000, the default number of receive and transmit buffers is controlled by the guest driver, with a maximum of 4096 for both.
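The current and maximum ring sizes can be inspected, and the receive ring enlarged up to the 4096 maximum noted above, with ethtool (eth0 is a placeholder; whether the resize is accepted depends on the driver):

# ethtool -g eth0
# ethtool -G eth0 rx 4096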

Time

  1. For the most accurate timekeeping, configure the system to use NTP (a sample configuration appears at the end of this section). The VMware Tools time-synchronization option is not considered a suitable solution. Versions prior to ESXi 5.0 were not designed for the same level of accuracy and do not adjust the guest time when it is ahead of the host time. Ensure that the VMware Tools time-synchronization feature is disabled.
  2. Change the timer interrupt rate.

For Gaia OS and SecurePlatform OS installations, add the following kernel parameters in the /boot/grub/grub.conf file:

notsc divider=10 clocksource=acpi_pm

Example:

title Start in normal mode
        root (hd0,0)
        kernel /vmlinuz ro  vmalloc=256M noht notsc divider=10 clocksource=acpi_pm root=/dev/vg_splat/lv_current panic=15 console=SERIAL crashkernel=64M@16M 3 quiet
        initrd /initrd

For additional information about best practices for timekeeping within Virtual Machines, refer to VMware's timekeeping documentation.
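As a sketch of step 1 above on Gaia OS, NTP can be configured from clish; the server address 192.0.2.1 is a placeholder, and the exact commands should be verified against the Gaia Administration Guide:

> set ntp server primary 192.0.2.1 version 4
> set ntp active on
> save config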

I/O Scheduling

As of the Linux 2.6 kernel, the default I/O Scheduler is Completely Fair Queuing (CFQ). Testing has shown that NOOP or Deadline perform better for virtualized Linux guests. ESX uses an asynchronous intelligent I/O scheduler, and for this reason virtual guests should see improved performance by allowing ESX to handle I/O scheduling.

This change can be implemented in a few different ways.

The scheduler can be set for each hard disk unit. To check which scheduler is being used for a particular drive, run this command:
# cat /sys/block/<disk>/queue/scheduler

For example, to check the current I/O scheduler for disk sda:
# cat /sys/block/sda/queue/scheduler
[noop] anticipatory deadline cfq

The active scheduler is shown in square brackets; in this example, the sda drive scheduler is set to NOOP.

To change the scheduler on a running system, run this command:
# echo <scheduler> > /sys/block/<disk>/queue/scheduler

For example, to set the I/O scheduler for disk sda to NOOP:
# echo noop > /sys/block/sda/queue/scheduler

Note: This command will not change the scheduler permanently. The scheduler will be reset to the default on reboot. To make the system use a specific scheduler by default, add an elevator parameter to the default kernel entry in the GRUB boot loader /boot/grub/menu.lst file.

For example, to make NOOP the default scheduler for the system, the /boot/grub/menu.lst kernel entry would look like this:

title CentOS (2.6.18-128.4.1.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-128.4.1.el5 ro root=/dev/VolGroup00/LogVol00 elevator=noop
        initrd /initrd-2.6.18-128.4.1.el5.img

With the elevator parameter in place, the system will set the I/O scheduler to the one specified on every boot.

Disk Queue Depth

Increasing the disk's queue_depth increases disk I/O throughput. For each disk presented to the Virtual Machine, change the queue depth as follows:

# echo "975" > /sys/block/sda/device/queue_depth

I/O request queue

The nr_requests parameter sets the depth of the I/O request queue, covering both reads and writes. With the Deadline scheduler, it should be set to twice the queue_depth (here, 2 x 975 = 1950); this sizing provided the best performance of the nr_requests settings tested.

For each disk presented to the Virtual Machine, change nr_requests as follows:

# echo "1950" > /sys/block/sda/queue/nr_requests