WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 

xen-devel

[Xen-devel] [PATCH] Bind guest with NUMA node.

To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] [PATCH] Bind guest with NUMA node.
From: "Duan, Ronghui" <ronghui.duan@xxxxxxxxx>
Date: Tue, 26 Feb 2008 11:40:21 +0800
Delivery-date: Mon, 25 Feb 2008 19:51:42 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Ach4KVAEekvoZpcMS0CmWaXBzIm2DQ==
Thread-topic: [PATCH] Bind guest with NUMA node.

Hi Keir;

 

Currently, under Xen’s scheduler, if users don’t set VCPU affinity, a VCPU can run on any physical CPU in the machine. On a NUMA machine, performance will suffer because of the extra memory-access latency when a CPU and the memory it accesses are on different nodes. So I think there may be a need to supply a mechanism that makes Xen run better on NUMA machines even when users don’t set VCPU affinity. I have thought out these policies:

 

1: Don’t make any changes; only supply per-node free memory info to help the guest set a proper VCPU affinity. This has been realized in my last patch.

 

2: When setting max-vcpus during domain build, we can choose a node based on the current policy of choosing a CPU to locate a VCPU, which mainly considers CPU balance. Then set this node’s cpumask as the affinity of all VCPUs, binding the domain to this node. The disadvantage of this method is that, after setting max-vcpus, if the user configures VCPU affinity, the VCPU affinity will be set again. This is done in the first patch attached.
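A rough sketch in Python of what this policy amounts to (the function and data-structure names here are illustrative, not the actual Xen domain-builder code): once a node is chosen, its cpumask is applied as the affinity of every VCPU of the domain.

```python
def bind_domain_to_node(vcpus, node_to_cpus, chosen_node):
    """Pin every VCPU of a domain to the physical CPUs of one NUMA node.

    vcpus: list of VCPU ids belonging to the domain.
    node_to_cpus: mapping of node id -> set of physical CPU ids on that node.
    chosen_node: the node picked by the placement policy.
    Returns a mapping of VCPU id -> affinity mask (set of physical CPUs).
    """
    node_mask = set(node_to_cpus[chosen_node])
    # Every VCPU gets the same mask: the whole domain is confined to the node.
    return {v: node_mask for v in vcpus}
```

For example, on a two-node machine with CPUs 0-3 on node 0 and 4-7 on node 1, choosing node 1 for a 2-VCPU domain would set both VCPUs' affinity to {4, 5, 6, 7}. If the user later sets an explicit affinity, it simply overwrites these masks, which is the disadvantage noted above.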

 

3: We can do this in the CP. If the user doesn’t set VCPU affinity, we can choose a VCPU affinity for the guest domain. This needs a new policy for choosing which node the guest will run on in a NUMA machine. I think it is reasonable to consider memory usage first. I do this in the second patch. This patch depends on my last patch, which gets the free memory size per node.
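A minimal sketch of the memory-first node-selection idea (hypothetical helper, not the code in the attached patch): given the per-node free memory reported by the earlier patch, pick the node with the most free memory that can still hold the domain.

```python
def choose_node_by_free_memory(free_mem_per_node, domain_mem):
    """Pick the NUMA node with the most free memory that fits the domain.

    free_mem_per_node: mapping of node id -> free memory in MB.
    domain_mem: memory required by the domain in MB.
    Returns the chosen node id, or None if no single node has enough
    free memory (in which case no binding would be applied).
    """
    candidates = [(mem, node) for node, mem in free_mem_per_node.items()
                  if mem >= domain_mem]
    if not candidates:
        return None
    # max() on (mem, node) tuples selects the node with the most free memory.
    return max(candidates)[1]
```

With free memory {node 0: 1024 MB, node 1: 2048 MB} and a 512 MB domain, this picks node 1; the resulting node would then be bound as in policy 2.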

 

Which method do you prefer? Comments are welcome. Thanks.

Attachment: set_vcpu_affinity_in_xen.patch
Description: set_vcpu_affinity_in_xen.patch

Attachment: set_vcpu_affinity_in_CP.patch
Description: set_vcpu_affinity_in_CP.patch

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel