Re: [Xen-users] Xen and iSCSI

Hi,

On Sunday, 29 January 2006 at 16:59, Per Andreas Buer wrote:
> Markus Hochholdinger wrote:
> > well, my idea of HA is as follows:
> >  - Two storage servers on individual SANs connected to the Xen hosts.
> > Each storage server provides block devices per iscsi.
> I guess gnbd can be a drop-in replacement for iSCSI. I would think
> performance is better as gnbd is written for the Linux kernel - the
> SCSI protocol is written for hardware. I _know_ gnbd is easier to set
> up. You just point the client to the server and the client populates
> /dev/gnbd/ with the named entries (the devices are given logical names -
> no SCSI buses, devices or LUNs).

Fully acknowledged. I was wrong in my thinking. I read the gnbd docs, and it 
is really what I want!
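
For my own notes, the export/import looks roughly like this if I read the 
gnbd docs right (server, volume and export names are just made-up examples):

  # on the storage server (device and export name are hypothetical)
  gnbd_serv
  gnbd_export -v -e domu1-disk -d /dev/vg_storage/domu1

  # on the client (dom0 or domU)
  gnbd_import -v -i storage1
  # the device then shows up as /dev/gnbd/domu1-disk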


> If we compare your iSCSI-based setup to a Heartbeat/DRBD/GNBD setup
> there might be some interesting points. You
> can choose for yourself if you want the DomUs to act as GNBD clients or
> if you want to access the GNBD servers directly from your DomU - or a
> combination (through Dom0 for rootfs/swap - and via GNBD for data volumes).

Yes. The setup with heartbeat and drbd (still) sounds complicated to me. But 
using gnbd only for block devices over the network seems to be the easiest way. 
And as you say, you can do a lot of things with gnbd. You are not 
restricted to using heartbeat, drbd or multipath!
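
If dom0 acts as the gnbd client, handing the device to the guest is just the 
usual phy: entry in the domU config. A sketch with made-up names:

  # /etc/xen/domu1 (hypothetical example)
  disk = [ 'phy:/dev/gnbd/domu1-root,sda1,w',
           'phy:/dev/gnbd/domu1-swap,sda2,w' ]
  root = "/dev/sda1 ro"

If the domU itself is the gnbd client, these lines go away for the imported 
volumes and the gnbd_import happens inside the guest instead.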


> >  - On domU two iscsi block devices are combined into a raid1. On this raid1
> > we will have the rootfs.
> > Advantages:
> >  - storage servers can easily be upgraded. Because of raid1 you can safely
> > disconnect one storage server and upgrade hard disk space. After resyncing
> > the raid1 you can do the same with the other storage server.
> The same with Heartbeat/DRBD/GNBD. You just fail one of the storage
> servers and upgrade it. After it is back up DRBD does an _incremental_
> sync which usually just takes a few seconds. With such a setup you can
> use a _dedicated_ link for DRBD.

Or you do the raid1 stuff in domU. That way the setup of the storage servers 
would be easier. And setting up raid1 in domU is also very easy.
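
Inside the domU it should be no more than something like this, assuming the 
two imported devices are /dev/gnbd/domu1-a and /dev/gnbd/domu1-b (made-up 
names):

  # build the mirror from the two network block devices
  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/gnbd/domu1-a /dev/gnbd/domu1-b
  mkfs.ext3 /dev/md0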


> >  - If you use a kind of lvm on the storage servers you can easily expand
> > the exported iscsi block devices (the raid1 and the filesystem also have
> > to be expanded).
> The same goes for Heartbeat/DRBD/GNBD I would guess.

And it also works without heartbeat and drbd.
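
The grow path would be roughly the following (volume names are hypothetical, 
and the clients probably have to re-import the device to see the new size):

  # on the storage server: grow the logical volume behind the export
  lvextend -L +10G /dev/vg_storage/domu1

  # in the domU, once the bigger device is visible again
  mdadm --grow /dev/md0 --size=max
  resize2fs /dev/md0    # or ext2online for an online grow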


> >  - You can do live migration without configuring the destination Xen
> > host specially (e.g. providing block devices in dom0 to export to domU)
> > because everything is done in domU.
> GNBD clients are more or less stateless.

Ack.


> >  - If one domU or the Xen host dies you can easily start the domUs on
> > other Xen hosts.
> > Disadvantages:
> >  - When one storage server dies ALL domUs have to rebuild their raid1 when
> > this storage server comes back. High traffic on the SANs.
> You will also have to rebuild a volume if a XenU dies while writing to
> disk.

Yes, that's a drawback. But you have to rebuild a raid1 somewhere if something 
fails. If I do the raid1 in domU, I can rebuild the raid1 of the important 
servers first.
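
The re-add itself is only a command or two per domU (device names are made up 
here), so ordering the important guests first is easy. And if the kernel and 
mdadm are recent enough, an internal write-intent bitmap should make the 
resync incremental, similar to what DRBD does:

  # inside the domU, once the failed storage server exports again
  gnbd_import -v -i storage1
  mdadm /dev/md0 --re-add /dev/gnbd/domu1-a
  cat /proc/mdstat    # watch the rebuild
  # optional, newer mdadm: mdadm --grow /dev/md0 --bitmap=internal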


> >  - Not easy to setup a new domU in this environment (lvm, iscsi, raid1)
> iSCSI for rootfs sounds like a lot of pain.

Yeah, that's right. You have to make all the iscsi modules and iscsi programs 
available in the initrd. The configuration possibly has to live in the initrd 
as well.
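
Off the top of my head the initrd needs roughly the following; exact module 
and tool names depend on the initiator you use, so this is only a sketch:

  # kernel modules for the initiator and the mirror
  scsi_transport_iscsi, iscsi_tcp, sd_mod, md_mod, raid1

  # userland copied into the initrd, with its config
  iscsid, iscsiadm, initiatorname/target settings, mdadm, a minimal mdadm.conf

  # init order: bring up the network -> log in to both targets ->
  # assemble /dev/md0 -> mount it as root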


> > Not sure:
> >  - Performance? Can we get full network performance in domU? Ideally we
> > can use the full bandwidth of the SANs (e.g. 1GBit/s). And if the SANs can
> > handle this (I will make raid0 with three SATA disks in each storage
> > server).
> Remember that every write has to be written twice. So your write
> capacity might suffer a bit.

Yes, that is true. And generating the network traffic in domU causes roughly 
twice the CPU load compared to doing it in dom0.
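
A rough back-of-the-envelope check (my assumptions, not measurements): 1GBit/s 
is about 125 MB/s raw, maybe 100-110 MB/s after TCP/iSCSI overhead. With the 
raid1 in domU every write goes out twice, once per mirror. If each mirror has 
its own NIC and SAN, writes are still limited by a single leg (~100 MB/s), only 
the data is sent twice; if both legs share one NIC, writes top out around half 
of that, maybe 50-60 MB/s. Reads only touch one leg, so they should stay near 
full link speed.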


-- 
greetings

eMHa

Attachment: pgpofaacNiV16.pgp
Description: PGP signature

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users