Open Source Virtualization

Debian Squeeze, Native ZFS on Linux, Proxmox 2.0,

KVM (Kernel-based Virtual Machine),

and iSCSI Target all working together!

 

This configuration was created in order to get the following running together:

 

Debian Squeeze (Default install with separate partitions for /var /tmp /home / /boot)

(Packages include file server and SSH Server support)

Native ZFS on Linux, set up as a RAIDZ1 pool (created with zpool raidz below)

Proxmox 2.0 (Web-Based Graphical User Interface)

KVM (Kernel-based Virtual Machine)

iscsitarget (IET, not to be confused with open-iscsi)

Here is what our configuration files look like:


cat /etc/hosts

cat /etc/resolv.conf

 

cat /etc/apt/sources.list

cat /etc/sudoers
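For reference, the Proxmox-related entries in /etc/apt/sources.list look roughly like this on Squeeze (a sketch based on the standard Proxmox 2.0 repository; adjust the Debian mirror to your own):

deb http://ftp.debian.org/debian squeeze main contrib
deb http://security.debian.org/ squeeze/updates main contrib
deb http://download.proxmox.com/debian squeeze pve

# import the Proxmox repository key before running aptitude update
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -

Proxmox also expects the machine's hostname in /etc/hosts to resolve to its LAN IP address (not 127.0.1.1), so check that file before installing the packages below.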

 

How to install Proxmox 2.0 on Squeeze


aptitude update && aptitude full-upgrade
aptitude install ntp ssh lvm2 postfix

nano /boot/grub/grub.cfg


Verify the Proxmox kernel is listed and selected as the default
(or just boot, pick the pve kernel at the GRUB menu, and change the default later; just make sure that kernel is the one actually booted)
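Rather than hand-editing grub.cfg, the usual Debian way is to set the default entry in /etc/default/grub and regenerate the config; a minimal sketch (the entry index below is a placeholder, check your own menu):

grep menuentry /boot/grub/grub.cfg        # find the position of the -pve kernel entry
nano /etc/default/grub                    # set GRUB_DEFAULT to that index, e.g. GRUB_DEFAULT=0
update-grub                               # regenerate /boot/grub/grub.cfg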


sudo shutdown -r now
uname -r
2.6.32-6-pve

aptitude install ksm-control-daemon vzprocps build-essential gawk alien fakeroot zlib1g-dev uuid
aptitude install uuid-dev libssl-dev parted proxmox-virtual-environment

 
shutdown -r now

 

GETTING and INSTALLING required SPL package 

GETTING and INSTALLING Turbo Fredriksson's (FransUrbo) ZFS patch for iSCSI and SMBFS share support
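The exact download steps are not reproduced here; the sketch below shows the general ZFS-on-Linux build-from-source flow on Debian (URLs, branch and version numbers are placeholders, not the author's exact sources -- this is why alien, fakeroot and the -dev packages were installed above):

# SPL (Solaris Porting Layer) -- tarball URL and version are placeholders
wget http://example.com/spl-0.6.0-rcX.tar.gz
tar xzf spl-0.6.0-rcX.tar.gz && cd spl-0.6.0-rcX
./configure
make deb                                  # builds .deb packages via alien
dpkg -i *.deb && cd ..

# ZFS with the iSCSI/SMBFS share patches -- repository location is an assumption
git clone https://github.com/FransUrbo/zfs.git && cd zfs
./configure
make deb
dpkg -i *.deb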

QUESTION TO TURBO

 

You have indicated that the iSCSI share command is all that's necessary, without editing the
following file: ietd.conf. Is this correct, and is that the step I should leave out?

Correct. This is done with 'zfs set shareiscsi=on'. It actually runs a command ('ietadm') that creates the share, without editing the config file.

The actual command(s) are (see lib/libshare/README_iscsi.txt for more info):

cat /sbin/zfs_share_iscsi.sh
PS. Make sure it's executable: chmod u+x /sbin/zfs_share_iscsi.sh 🙂

cat /etc/iet/ietd.conf

service iscsi-target start
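Once the target daemon is running, a quick way to check that the shares created by 'zfs set shareiscsi=on' really exist is IET's proc interface (output will depend on your own zvols):

cat /proc/net/iet/volume      # LUNs currently exported by ietd
cat /proc/net/iet/session     # initiators currently logged in
netstat -ltn | grep 3260      # ietd listening on the default iSCSI port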

parted -l

 

Use these commands to identify the drives that will be used for the RAIDZ1 pool


ls -la /dev/disk/by-id
ls -la /dev/disk/by-path
ls -la /dev/disk/by-uuid
(Thanks Zenny for the find.)


zpool create tank raidz \
    ata-WDC_WD1001FALS-00E3A0_WD-WCATR0096697 \
    ata-WDC_WD10EACS-07D6B0_WD-WCAU43599448 \
    ata-SAMSUNG_HD204UI_S2H7J1BZ931060 \
    ata-ST31000333AS_6TE0JK2P
sudo zpool status tank

 

zpool status tank


pool: tank
state: ONLINE
scan: none requested


MY CONFIG AS IT APPEARS


Creating the datasets: enabling compression and dedup, and turning iSCSI sharing on
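The exact commands are not reproduced above; a minimal sketch of this step, with hypothetical dataset names and sizes (adjust to your own pool):

zfs create tank/vms                  # plain filesystem for general storage
zfs set compression=on tank/vms      # enable compression
zfs set dedup=on tank/vms            # enable deduplication (needs plenty of RAM)
zfs create -V 500G tank/iscsi1       # a zvol to export over iSCSI (size is a placeholder)
zfs set shareiscsi=on tank/iscsi1    # with Turbo's patch, this runs ietadm to create the share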


zfs list

cat /etc/network/interfaces 

cat /etc/resolv.conf

In /etc/network/interfaces, add the following line under the physical interface's stanza: post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
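The full interfaces file is not reproduced above; the sketch below shows a routed, proxy-ARP style layout consistent with the 10.10.10.x guest network used later in this article (all addresses are placeholders; see the Proxmox Network Model wiki page linked further down for the alternatives):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 192.168.11.24
        netmask 255.255.255.0
        gateway 192.168.11.1
        post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0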

In a browser, bring up
https://192.168.x.x:8006 (the web interface for Proxmox 2.0; it uses the root username/password by default)

Create LVM storage, step one: add the iSCSI target under Folder View > Datacenter > Storage > Add > iSCSI Target


target iqn.2001-04.com.wanfuse.myad2:iscsi1:12345678901234567890


Portal: 192.168.11.24:3260 (IP address of the Debian box, or of the iSCSI target if it's not on the same box)
ID: VMs (something unique)
Enable: check this setting
Use LUNs directly: uncheck (otherwise it's one VM per LUN only)

 

Create LVM storage, step two: under Folder View > Datacenter > Storage > Add > LVM Group

Click on Add and select Create LVM storage

ID: VMsExample (something unique goes here)
Base Storage: VMs (the iSCSI storage defined above)
Base Volume: CH 00 ID 0 LUN 0 (/etc/iet/ietd.conf shows the LUN number)
Volume Group: something unique (if you use volume groups it may be necessary to specify the proper one; I don't know for sure)
Nodes: all nodes, no restrictions (grayed out anyway)
Enable: check it
Shared: (not sure what this does)
Once configured, click on Create
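Under the hood this step just initializes the iSCSI LUN as an LVM physical volume and creates a volume group on it; the rough CLI equivalent would be (the device name is a placeholder for whatever /dev/sdX the LUN shows up as on the Proxmox node):

pvcreate /dev/sdX                 # mark the iSCSI LUN as an LVM physical volume
vgcreate VMsVolumeGroup /dev/sdX  # create the volume group named in the dialog above
pvdisplay                         # verify (also handy for debugging later)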


For storage of ISOs on a share, add a directory store under Folder View > Datacenter > Storage > Add > Directory


give it a name


select a mounted volume directory 


NOTE: You should have a separate disk, meant to hold ISO files, that you installed and mounted either during the Debian install or afterward (the mount commands are shown below).

For Content, select the down arrow (tricky to see) to the right of where it says Images, hold down the Ctrl key, select ISO, and unselect Images. You can do the same sort of thing for a template share, and for backups and containers (not sure what containers does at this point).

(Proxmox has a beautiful VM backup system, so creating a LUN, an LVM-plus-iSCSI share, or just an LVM volume for backups is definitely worth your while, but that is beyond this article.)

Select the nodes that can access it; the default is all nodes
Select Enable
Leave Shared unchecked (not sure what this does anyway at this point)

Format /dev/sdf1 using mkfs.ext4 /dev/sdf1
mkdir /ISO
mount /dev/sdf1 /ISO
Edit /etc/fstab to make the mount permanent (beyond the scope of this doc), with an entry like this:
/dev/sdf1 /ISO auto rw,user,noauto 0 0

^-- This may vary depending on what you want. For inexperienced users it is easier to mount the disk during the install of Debian Squeeze, being careful that the disk is not the one meant for root, /boot, or any of the other system partitions.

 

 

 

Click CREATE VM in the top right corner of the interface
In the Create New Virtual Machine dialog, the General tab comes up by default:
Node: select the server
VM ID: a unique number (100-1000)
VM Name: give it a name

Select the OS Type tab
Select the proper OS type
The Installation Media tab appears un-grayed after selecting the OS type; select it
Select Use CD/DVD disc image file from storage local
(which is the default storage location for disk images; this is changeable)
Select the ISO image of the operating system

OR

Select Use physical media (which will be a slower install)
Select the next tab that appears, Hard Disk
Select IDE or SCSI, but remember the OS must support the driver if it's SCSI
Storage: select the LVM volume created from the iSCSI target mapping
Enter a disk size (the default is 32 GB) and select an image format (keep the default of raw image)
Cache type: No cache, unless you have battery-backed RAM
Select the CPU tab
Sockets: number of sockets
Cores: number of cores
CPU: the default type is qemu64; change it to match your CPU
Select the Memory tab
Enter enough RAM for the guest, depending on the guest and on the amount of RAM you have in the system
Select the Network tab
Bridged mode to vmbr0 is the default and should be used with my /etc/network/interfaces configuration shown above

(Other possible configurations of /etc/network/interfaces can be found in this article
http://pve.proxmox.com/wiki/Network_Model)


Select the network card type: for Windows, Realtek and Intel E1000 both work (not sure about Linux VMs, though)
Select the Confirm tab, make sure the settings are what you want, and then click Finish to create the VM.
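The same VM can also be created from the shell with the qm tool; a rough sketch matching the settings above (the storage name, ISO file and sizes are placeholders, and the exact option set should be checked against 'man qm' on your node):

qm create 100 -name testvm -ostype l26 -sockets 1 -cores 2 -memory 2048 \
    -net0 e1000,bridge=vmbr0 \
    -ide0 VMsExample:32 \
    -cdrom local:iso/your-install-image.iso
qm start 100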

 

If the VM does not start, go to a terminal (an SSH terminal emulator from Windows, or a prompt on the Debian box) and type

qm start 100 (or whatever the VM number is)
See the bottom of this article for troubleshooting steps. 

 

Proxmox configuration


LVM Groups with Network Backing


In this configuration, network block devices (iSCSI targets) are used as the physical volumes for LVM logical volume storage. This is a two-step procedure and can be fully configured via the web interface.
First, add the iSCSI target. (On some iSCSI targets you need to add the IQN of the Proxmox VE server to allow access.)
Click ‘Add iSCSI Target’ on the Storage list
As storage name use whatever you want but take care, this name cannot be changed later.
Give the ‘Portal’ IP address or server name and scan for unused targets
disable ‘use LUNs directly’
Click save
Second, add LVM group on this target.
Click ‘Add LVM Group’ on the Storage list
As storage name use whatever you want but take care, this name cannot be changed later.
For ‘Base Storage’, use the drop down menu to select the previously defined iSCSI target.
For ‘Base Volume’ select a LUN
For ‘Volume Group Name’ give a unique name (this name cannot be changed later).
Enable shared use (recommended)

 


Debugging VMs on Proxmox 2.0
qm start 100 (where 100 is the number associated with the VM; this tells you what is going on on the KVM side, a really handy tool)

netstat -n | grep -i 3260 (search for port used by default for iscsi)

ping google.com from the host to make sure you have internet connectivity
pvdisplay (see information on volumes, like UUIDs)


ls /etc/iscsi/nodes
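If the LVM storage never appears, it can also help to check the initiator side directly with open-iscsi's iscsiadm (the portal address is whatever you configured above):

iscsiadm -m discovery -t sendtargets -p 192.168.11.24   # list targets offered by the portal
iscsiadm -m session                                      # list sessions currently logged in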

zpool status

From the host side, which is on the 192.168.x.x network, you need to go to the router and add a gateway and a route to the 10.10.10.x network, otherwise it won't work.

Static Route
10.10.10.0/24 - select the previously created gateway on the LAN router, and give it the description "Route to 10.10.10.x".


Simple as that; now guests can route, if you have previously set up /etc/network/interfaces as shown above.
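If you only want a single Linux machine on the 192.168.x.x LAN to reach the guests, instead of touching the router, a one-off route on that machine does the same job (the gateway here is the Proxmox host's LAN address, used as a placeholder):

ip route add 10.10.10.0/24 via 192.168.11.24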