
Tuesday, August 30, 2011

Disable Large Segment Offload (LSO) in Solaris 10

In this blog article, I will share my understanding of Large Segment Offload (LSO). I was given a task to disable LSO on a few of the servers (including zones).

Let's first understand what LSO stands for and what the purpose of using LSO is.

As you can see above, LSO stands for Large Segment Offload.

TCP Offload Engine is an emerging technology designed to offload TCP stack handling from the main system CPU to a processor built into the NIC, so that fewer CPU cycles and less kernel time are consumed.

LSO saves valuable CPU cycles by allowing the network protocol stack to handle large segments instead of the traditional model of MSS (TCP Maximum Segment Size) sized segments. In the traditional network stack, the TCP layer segments the outgoing data into MSS-sized segments and passes them down to the driver. This becomes computationally expensive with 10 GigE networking because of the large number of kernel function calls required for every MSS-sized segment. With LSO, a large segment is passed by TCP to the driver, and the driver or NIC hardware does the job of TCP segmentation (LSO offloads the Layer 4 segmentation job to the NIC driver). An LSO segment may be as large as 64 KByte. The larger the LSO segment, the better the CPU efficiency, since the network stack has to work with a smaller number of segments for the same throughput.
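To put rough numbers on it (illustrative figures, assuming the common 1500-byte Ethernet MTU): the MSS is typically about 1460 bytes (1500 minus 20 bytes of IP header and 20 bytes of TCP header), so a single 64 KByte LSO segment replaces roughly 64 * 1024 / 1460 ≈ 45 MSS-sized segments that the stack would otherwise have to build and pass down one by one.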

So in simple words, use LSO for better network performance while reducing processor (CPU) utilization.

Segmentation is needed if a full TCP segment does not fit into the Ethernet Maximum Transmission Unit (MTU) size. With LSO, TCP segments do not need to be split in software; this is done on the interface card hardware instead. Being much more efficient, this improves network performance while reducing the workload on the CPUs. LSO is most helpful for 10 Gigabit Ethernet network interfaces and on systems with slow CPU threads or a lack of CPU resources.

Solaris LSO can be used if all of the following three conditions are met (a quick way to confirm which driver an interface uses is sketched after this list):

1. The TCP/IP stack integrates LSO,
2. The Network Interface Card hardware supports it (e.g. cards handled by drivers like e1000g, ixgb, ixgbe, etc.),
3. The driver for this network card is capable of handling it.
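
As a minimal sketch (the interface names below are illustrative, not from the original setup), you can confirm which driver backs an interface before deciding which driver .conf file to edit:

# ifconfig -a
# dladm show-link

ifconfig -a lists the plumbed interfaces (e.g. e1000g0, e1000g1) and dladm show-link shows the underlying data links. The interface name tells you the driver, so e1000g interfaces are configured through /kernel/drv/e1000g.conf, ixgbe interfaces through /kernel/drv/ixgbe.conf, and so on.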

Sadly, in most cases LSO does not seem to work that well, which is what leads to disabling LSO support. Here are the ways to disable LSO.

Ways to disable LSO -

Disable LSO by adding the following line to the /kernel/drv/e1000g.conf file (I'm using the e1000g interface/driver, hence the file I'm editing is /kernel/drv/e1000g.conf):

lso_enable=0,0,0,0,0,0,0,0;
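
As a minimal sketch (assuming root access, and keeping a backup of the original file first), the line can be appended and verified like this; each comma-separated value is generally understood to apply to one driver instance (e1000g0, e1000g1, and so on), with 0 disabling LSO for that instance:

# cp /kernel/drv/e1000g.conf /kernel/drv/e1000g.conf.orig
# echo 'lso_enable=0,0,0,0,0,0,0,0;' >> /kernel/drv/e1000g.conf
# grep lso_enable /kernel/drv/e1000g.conf
lso_enable=0,0,0,0,0,0,0,0;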

After making the change, a reboot is required. If a reboot is not possible, you can use the ndd utility/command to disable LSO on a temporary basis; that setting will not persist across a reboot.

Using ndd you can disable it as shown below -

# ndd -set /dev/ip ip_lso_outbound 0
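
A minimal before-and-after check, assuming the parameter is currently enabled (the output shown here is illustrative):

# ndd -get /dev/ip ip_lso_outbound
1
# ndd -set /dev/ip ip_lso_outbound 0
# ndd -get /dev/ip ip_lso_outbound
0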

Also, if you don't want to reboot the server after modifying /kernel/drv/e1000g.conf, you can simply unplumb all of your e1000g interfaces with ifconfig, run "update_drv e1000g" to reload the .conf file, and then replumb and reconfigure the interfaces with ifconfig. However, if I'm going to unplumb the network interfaces, I'll end up disturbing the services anyway, so a reboot is the best option.
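For reference, a sketch of that unplumb/replumb sequence for a single interface (unplumb every e1000g interface first, run update_drv once, then plumb and reconfigure each one; the interface name, IP address, and netmask below are placeholders, not values from the original setup):

# ifconfig e1000g0 unplumb
# update_drv e1000g
# ifconfig e1000g0 plumb
# ifconfig e1000g0 192.0.2.10 netmask 255.255.255.0 up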

I had to disable LSO as our application folks were experiencing slowness in their web application (response time, etc.). It looks like LSO caused unstable connections, and hence there were observations such as dropped sockets, dropped packets, packet reordering, and packet retransmits; ultimately the application folks observed slowness in their web application, NFS, and so on.
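
If you want to see whether retransmits are part of the picture before and after the change, a rough check (counter names can vary by Solaris release) is to look at the TCP retransmission counters:

# netstat -s -P tcp | grep -i retrans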
