Juniper vSRX (Virtual SRX) under KVM on CentOS 6.5
This is a post on how to set up Juniper's virtual SRX on KVM hypervisor (QEMU hardware virtualization manager).
IIRC, KVM has been part of the CentOS/Red Hat Enterprise Linux distributions since version 6, where it replaced Xen as the default hypervisor.
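KVM requires hardware virtualization extensions (Intel VT-x or AMD-V), so before starting it is worth checking that the host CPU exposes them:
Code:
# egrep -c '(vmx|svm)' /proc/cpuinfo
A count greater than zero means the extensions are present (they may still have to be enabled in the BIOS).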
1. I'll begin this guide with the first step: downloading the CentOS and vSRX images.
CentOS LiveCD location:
http://mirrors.viralvps.com/centos/6.5/isos/x86_64/CentOS-6.5-x86_64-LiveCD.iso
vSRX location: see http://www.juniper.net/
2. Burn the CentOS ISO and boot from it; at the "Automatic Boot in 8 seconds" prompt, press the down arrow and select "Install".
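Optionally, verify the downloaded ISO against the checksum file published on the mirror before burning it (the exact name of the checksum file varies per mirror):
Code:
# sha1sum CentOS-6.5-x86_64-LiveCD.iso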
3. Set Hostname, TimeZone, root password.
4. Storage: a custom layout in my case. The CentOS drive layout is:
1. sda - 200MB /boot, ext3 (format it).
2. sdb - 20G rootVG for the hypervisor (19G rootLV for the root filesystem and 1G for swap). Choose to format the / mountpoint (rootLV).
3. sdc - 128G vmVG for the VMs, with a single logical volume "vmLV" to hold the VM images.
A few comments here: on some older systems the /boot filesystem needs to be ext3, otherwise there will be a mount error. Check
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/s2-diskpartrecommend-x86.html for more info.
The / and /boot mountpoints need to be formatted, so check the format option for both.
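Once the installed system boots, the resulting layout can be sanity-checked from the shell:
Code:
# df -h /boot /
# swapon -s
# lvs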
5. Create a remote management user, "admin" in my case. I'm using a generic username because this system is not Internet-facing and does not present a high security risk. Avoid well-known (dictionary) usernames like "admin" on public servers, as these are heavily targeted by attackers.
6. Enable SSH access and (for testing purposes) root login:
Code:
# who -r
run-level 5
# chkconfig --list | grep ssh
sshd 5:off
# chkconfig --level 5 sshd on
# chkconfig --list | grep ssh
sshd 5:on
# vim /etc/ssh/sshd_config
PermitRootLogin yes
# /etc/init.d/sshd start
Again: this system is in an isolated environment. Do not enable root login via SSH, and do not allow SSH connections from the Internet.
7. Once CentOS is installed and running, install KVM, libvirt, QEMU, and the other packages required for the setup:
# yum install kvm qemu-kvm python-virtinst libvirt libvirt-python virt-manager libguestfs-tools tunctl -y
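Before going further, it's a good idea to confirm that the kvm modules are loaded and that the libvirt daemon is running (output omitted; lsmod should list kvm plus kvm_intel or kvm_amd, and virsh should connect without errors):
Code:
# lsmod | grep kvm
# service libvirtd start
# chkconfig libvirtd on
# virsh version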
8. Prepare the networking.
It would seem that the CentOS 6.5 NetworkManager service does not yet support bridging, so it is necessary to use the classic network init scripts for the bridges.
Disable NetworkManager and enable the "network" service:
Code:
# chkconfig NetworkManager off
# chkconfig network on
# service NetworkManager stop
# service network start
Code:
# cd /etc/sysconfig/network-scripts/
# cat ifcfg-eth0
DEVICE=eth0
HWADDR=00:17:f2:0b:2b:6c
ONBOOT=yes
BRIDGE=br0
NM_CONTROLLED=no
# cat ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=10.210.3.4
PREFIX=23
GATEWAY=10.210.2.1
DNS1=10.210.2.254
DEFROUTE=yes
IPV6INIT=no
ONBOOT=yes
DOMAIN="domain.com"
DELAY=0
NM_CONTROLLED=no
# cat ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BRIDGE=br1
NM_CONTROLLED=no
# cat ifcfg-br1
DEVICE=br1
TYPE=Bridge
BOOTPROTO=static
IPV6INIT=no
ONBOOT=yes
DELAY=0
NM_CONTROLLED=no
A few notes here:
Keep "NM_CONTROLLED=no" on the eth0/eth1/br0/br1 interfaces so that NetworkManager leaves them alone. Restart and check the network service:
Code:
# service network restart
# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.0017f20b2b6c no eth0
br1 8000.0017f20b2b6d no eth1
So far so good. eth0 and eth1 are assigned to their respective bridges.
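A quick check from the host confirms that br0 picked up its address and can reach the gateway:
Code:
# ip addr show br0
# ping -c 1 10.210.2.1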
9. Preparing storage pools for the virtual machines. To recap, my storage layout consists of the root volume group for the hypervisor and a "vmVG" to host the virtual machine storage.
Code:
# vgs
VG #PV #LV #SN Attr VSize VFree
rootVG 1 2 0 wz--n- 20.00g 0
vmVG 1 1 0 wz--n- 128.81g 0
# lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
rootLV rootVG -wi-ao---- 18.55g
swapLV rootVG -wi-ao---- 1.44g
vmLV vmVG -wi-a----- 128.81g
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 rootVG lvm2 a-- 20.00g 0
/dev/sda3 vmVG lvm2 a-- 128.81g 0
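For reference, if your installer did not create the VM volume group, it can be built by hand along these lines (/dev/sdX1 is a placeholder for the actual partition; adapt it to your layout):
Code:
# pvcreate /dev/sdX1
# vgcreate vmVG /dev/sdX1
# lvcreate -l 100%FREE -n vmLV vmVG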
I will format the "vmLV" logical volume as ext4; the resulting filesystem will host the virtual machine .img disks.
Code:
# mkfs.ext4 /dev/vmVG/vmLV
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
8445952 inodes, 33767424 blocks
1688371 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1031 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
# e2label /dev/vmVG/vmLV vmLV
# mkdir -p /var/lib/libvirt/pools/pool1
# mount LABEL=vmLV /var/lib/libvirt/pools/pool1
# grep vmLV /etc/fstab
LABEL=vmLV /var/lib/libvirt/pools/pool1 ext4 defaults 1 3
So the filesystem is ready. It's time to create a qemu storage pool using the "dir" type:
Code:
# vim /etc/libvirt/qemu/pool/pool1.xml
<pool type="dir">
<name>pool1</name>
<target>
<path>/var/lib/libvirt/pools/pool1</path>
</target>
</pool>
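The XML file alone does not register the pool with libvirt; assuming the standard virsh workflow, the pool still has to be defined and started (pool-autostart is optional, and the installer output below indeed reports autostart as "no"):
Code:
# virsh pool-define /etc/libvirt/qemu/pool/pool1.xml
# virsh pool-start pool1
# virsh pool-autostart pool1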
10. Creating the virtual machine using the vSRX .jva file. Copy the JVA virtual SRX file onto the host in any directory you choose and install the VM:
Code:
# bash junos-vsrx-12.1X46-D10.2-domestic.jva VM1 -s pool1 -i 2:virtio:br0,br1
Accept?[y/n]y
Extracting ...
Checking existence of VM VM1 ...
HOST = , storage = pool1, vm_name = VM1, img = junos-vsrx-12.1X46-D10.2-domestic-1387348130/junos-vsrx-12.1X46-D10.2-domestic.img
Checking existence of storage pool pool1 ...
pool1 active no
Getting storage path ...
Storage path: /var/lib/libvirt/pools/pool1
/root/junos-vsrx-12.1X46-D10.2-domestic-1387348130
SHA1(junos-vsrx-12.1X46-D10.2-domestic.img)= 9dd2390cc79b554360ec7c12e7ca63e9b781e783
-rw-r--r--. 1 17105 950 260M Dec 18 07:29 junos-vsrx-12.1X46-D10.2-domestic-1387348130/junos-vsrx-12.1X46-D10.2-domestic.img
cp junos-vsrx-12.1X46-D10.2-domestic-1387348130/junos-vsrx-12.1X46-D10.2-domestic.img /var/lib/libvirt/pools/pool1/VM1.img
Checking host CPU features ...
Creating VM on the host ...
Domain VM1 defined from VM1.xml
Checking the VM ...
- VM1 shut off
One problem here: the .jva installer deploys the network interfaces as "network" type instead of "bridge". To fix this, the XML file needs to be edited and reused to define the virtual machine.
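You can confirm the deployed interface type by dumping the domain definition:
Code:
# virsh dumpxml VM1 | grep -A 3 '<interface'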
Modify and backup the VM1.xml file as below:
Code:
43c43
< <interface type='network'>
---
> <interface type='bridge'>
45c45,46
< <source network='br0'/>
---
> <source bridge='br0'/>
> <target dev='vnet0'/>
49c50
< <interface type='network'>
---
> <interface type='bridge'>
51c52,53
< <source network='br1'/>
---
> <source bridge='br1'/>
> <target dev='vnet1'/>
Change the interface type and source to "bridge" and add the targets "vnet0" and "vnet1", as the sketch below illustrates. Save the edited /etc/libvirt/qemu/VM1.xml file to /etc/libvirt/qemu/VM1.xml.bkp (undefining the VM in the next step removes the original).
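For reference, each corrected interface stanza should end up looking roughly like this (the mac address line generated by the installer is kept as-is and omitted here; the virtio model matches the "-i 2:virtio" option used during the install):
Code:
<interface type='bridge'>
<source bridge='br0'/>
<target dev='vnet0'/>
<model type='virtio'/>
</interface>
The second interface is the same with br1/vnet1.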
Delete (undefine) the machine and recreate it using the above XML, renamed back into place after the VM was undefined.
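Reconstructed, the undefine-and-rename sequence looks roughly like this (virsh undefine removes the original /etc/libvirt/qemu/VM1.xml, after which the edited backup is moved back into place):
Code:
# virsh undefine VM1
# mv /etc/libvirt/qemu/VM1.xml.bkp /etc/libvirt/qemu/VM1.xml
The domain is then created from the restored file: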
Code:
# virsh create /etc/libvirt/qemu/VM1.xml
Domain VM1 created from /etc/libvirt/qemu/VM1.xml
The VM started automatically:
Code:
# virsh list --all
Id Name State
----------------------------------------------------
24 VM1 running
Now the bridge configuration displays the br0 and br1 bridges containing the eth0/vnet0 and eth1/vnet1 interfaces, as the output below shows. The ethX interfaces are the physical devices, whereas vnet0/vnet1 are the interfaces assigned to the virtual machine for direct access to the bridge.
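With the VM running, the output should look similar to this (bridge IDs reused from earlier; the vnet numbering may differ on your host):
Code:
# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.0017f20b2b6c no eth0
vnet0
br1 8000.0017f20b2b6d no eth1
vnet1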
11. Open your VM's console and configure the virtual SRX:
Code:
# virsh console VM1
Connected to domain VM1
Escape character is ^]
XXXhostnameXXX (ttyd0)
login: root
--- JUNOS 12.1X46-D10.2 built 2013-12-18 02:43:42 UTC
root@XXXhostnameXXX%
root@XXXhostnameXXX> show interfaces terse | match ge-
ge-0/0/0 up up
ge-0/0/0.0 up up inet
ge-0/0/1 up up
ge-0/0/1.0 up up inet
root@XXXhostnameXXX> configure
Entering configuration mode
The configuration has been changed but not committed
[edit]
root@XXXhostnameXXX# set system host-name VM1-vSRX
[edit]
root@XXXhostnameXXX# set system root-authentication plain-text-password
New password:
Retype new password:
root@XXXhostnameXXX# set interfaces ge-0/0/0.0 family inet address 10.210.3.5/23
root@XXXhostnameXXX# set routing-options static route 0/0 next-hop 10.210.2.1
root@XXXhostnameXXX# set security zones security-zone trust interfaces ge-0/0/0.0 host-inbound-traffic system-services ping
root@XXXhostnameXXX# commit
root@VM1-vSRX# run ping 10.210.2.1
PING 10.210.2.1 (10.210.2.1): 56 data bytes
64 bytes from 10.210.2.1: icmp_seq=0 ttl=64 time=7.438 ms
^C
--- 10.210.2.1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 7.438/7.438/7.438/0.000 ms
And the VM is now accessible over the network.
This tutorial is not bulletproof; in particular, debate remains on whether the "e1000" or "virtio" NIC drivers should be used.
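If you want to test the alternative driver, the model can be switched in the domain XML and the VM recreated (a rough sketch following the same workflow as above):
Code:
# virsh destroy VM1
# vim /etc/libvirt/qemu/VM1.xml
(change <model type='virtio'/> to <model type='e1000'/> on each interface)
# virsh create /etc/libvirt/qemu/VM1.xml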