Several weeks ago, I started deploying vQFX on our VMware ESXi lab environment. At the time of writing, Juniper still does not provide OVA files for the PFE (Packet Forwarding Engine) and RE (Routing Engine). That means deploying the vQFX10K is not as easy as deploying the vMX and vSRX.
Since it took me a few tries, I thought I’d share the steps on how to deploy multiple vQFX10K instances on ESXi. I hope this tutorial will prevent you from pulling your hair out.
Deploying vQFX10K will help with your Juniper certification pursuit and other lab scenarios. However, I didn’t use it for my JNCIA-Junos. I mostly used vMX and vSRX for WebUI practice.
Hardware
I initially deployed the vQFX10K on a Dell PowerEdge R610 at work. It has two Intel X5677 (4c/8t) CPUs and 128GB of RAM. Multiple vQFX10K instances have been running pretty smoothly on this hardware.
For this tutorial, I am running it on an older version of my current Intel NUC. Compared to the R610’s specs, this is quite an expensive kit. Even though the price is up there, I like it because of its small form factor.
I highly recommend Intel NUCs for people who have no space for used rack or tower servers. The 10th generation can go up to 64GB of RAM with 6c/12t.
Software
I’m using VMware ESXi version 6.7.0 Update 3, which is running off an 8GB USB flash drive. I’ve been running ESXi off a USB flash since my 2012 ESXi build. It’s a great way to separate and maximize the datastore for VMs.
For this tutorial, I used vQFX10K 18.4R1 release for both PFE (Packet Forwarding Engine) and RE (Routing Engine). Both vQFX10K and vMX require us to deploy two separate VMs. The PFE is the data plane that is responsible for forwarding transit traffic. The RE is the control plane which is the brain of the platform.
Deployment Steps
In this tutorial, I will be deploying one vQFX10K instance. The steps to add more are the same. Some of the steps aren't necessary if you only need one instance.
You may rearrange some of the steps discussed here. After all, this article is meant to serve as a guide.
1. Download vQFX10K
The first step is to grab copies of both the vQFX10K PFE and RE Virtualbox files. These files are available via Juniper's Software Download site, so you will need a Juniper account if you don't already have one. You may also need to request access to software downloads. I am not sure if it requires an active service contract, but since we are a Juniper customer, I was able to request access to download the files.
Once you're on the software download site, you need to expand the Application Package section. You will need both the Vagrant Package for Virtualbox (VQFX10K PFE) and the Vagrant Package for Virtualbox (VQFX10K Routing Engine) box files. At the time of writing, the 18.4R1 release is the current version available for evaluation.
2. vSwitch and Port Groups
My goal from the start was to create multiple instances of vMX and vQFX10K. Because of that, I needed to create a separate vSwitch and port groups so they wouldn't conflict with existing VMs.
Creating a new standard vSwitch
To create a new vSwitch using the ESXi host client, click on Networking under the Navigator pane. There you will see multiple tabs, such as Port groups and Virtual switches. Click on the Virtual switches tab and then click on Add standard virtual switch. Use the vSwitch settings shown below.

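If you prefer the ESXi shell over the host client, the same vSwitch can be created with esxcli. This is only a sketch: the vSwitch name vSwitch-vQFX and the port count are my assumptions, so adjust them to your lab.

```shell
#!/bin/sh
# Sketch: create a dedicated standard vSwitch for the vQFX lab from the ESXi
# shell. "vSwitch-vQFX" and --ports=128 are assumed values, not from the guide.
# run() prints each command and only executes it when RUN=1, so the script is
# safe to review off-box.
run() { echo "+ $*"; if [ "${RUN:-0}" = "1" ]; then "$@"; fi; }

run esxcli network vswitch standard add --vswitch-name=vSwitch-vQFX --ports=128
run esxcli network vswitch standard list
```

On the ESXi host (over SSH), invoke it as `RUN=1 sh create-vswitch.sh`; anywhere else it just prints the commands for review.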
Creating a new port group
To create a new port group, click on Port groups and then click on Add port group. You will need to create a new port group every time you add a new vQFX10K or vMX. This port group isolates the L2 connectivity between the PFE and RE from other vQFX10K instances. Make sure to change the VLAN ID for each new VM instance.

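The per-instance port group can also be added from the ESXi shell. The port group name vQFX01 matches the one used later in this guide; the vSwitch name and VLAN ID 101 are assumptions, so bump both the name and VLAN for each additional instance.

```shell
#!/bin/sh
# Sketch: add a per-instance port group with its own VLAN ID. "vQFX01" matches
# the port group referenced later in this guide; "vSwitch-vQFX" and VLAN 101
# are assumed values. run() prints each command and only executes it when
# RUN=1 (i.e., when you are actually on the ESXi host).
run() { echo "+ $*"; if [ "${RUN:-0}" = "1" ]; then "$@"; fi; }

run esxcli network vswitch standard portgroup add --portgroup-name=vQFX01 --vswitch-name=vSwitch-vQFX
run esxcli network vswitch standard portgroup set --portgroup-name=vQFX01 --vlan-id=101
```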
Upload box files to VMware ESXi
To upload both the PFE and RE box files, click on Storage under the Navigator pane. There you will see multiple tabs, such as Datastores and Adapters. Click on the Datastore browser, select your desired datastore, and upload both box files.
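Alternatively, since SSH is already enabled for the extraction step below, you can push the files with scp instead of the datastore browser. This is a sketch: the hostname esxi02 and the nfs1 datastore path match this lab, and the filenames are the ones used later in the tar step.

```shell
#!/bin/sh
# Sketch: upload both box files over SSH instead of the datastore browser.
# Assumes SSH is enabled on the ESXi host; "esxi02" and /vmfs/volumes/nfs1/
# are this lab's names -- substitute your own. run() prints each command and
# only executes it when RUN=1.
run() { echo "+ $*"; if [ "${RUN:-0}" = "1" ]; then "$@"; fi; }

run scp vqfx-18.4R1.8-pfe-virtualbox.gz vqfx-18.4R1.8-re-virtualbox.gz \
    root@esxi02:/vmfs/volumes/nfs1/
```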
Extract VMDK file
Once the upload of both PFE and RE box files is complete, you need to extract the VMDK files. To do this, use the tar command to unpack them.
Note: This assumes that you have SSH enabled on your ESXi server.
[andrew@esxi02:~] cd /vmfs/volumes/nfs1/
[andrew@esxi02:/vmfs/volumes/nfs1] mkdir vQFX-PFE
[andrew@esxi02:/vmfs/volumes/nfs1] mkdir vQFX-RE
[andrew@esxi02:/vmfs/volumes/nfs1] tar -zxf vqfx-18.4R1.8-pfe-virtualbox.gz -C vQFX-PFE
[andrew@esxi02:/vmfs/volumes/nfs1] tar -zxf vqfx-18.4R1.8-re-virtualbox.gz -C vQFX-RE
[andrew@esxi02:/vmfs/volumes/nfs1] ls -l vQFX*
vQFX-PFE:
total 1048364
-rw-r--r--    1 501      20            258 Mar 12  2018 Vagrantfile
-rw-r-----    1 501      20           9538 Mar 12  2018 box.ovf
-rw-r--r--    1 501      20             26 Mar 12  2018 metadata.json
-rw-rw----    1 501      20     1073497600 Mar 12  2018 packer-virtualbox-ovf-1520878605-disk001.vmdk

vQFX-RE:
total 438916
-rw-r--r--    1 501      20            258 Jan 11  2019 Vagrantfile
-rw-r--r--    1 501      20          13256 Jan 11  2019 box.ovf
-rw-r--r--    1 501      20             26 Jan 11  2019 metadata.json
-rw-r--r--    1 501      20      449420800 Jan 11  2019 packer-virtualbox-ovf-1547241286-disk001.vmdk
Optional: Rename the VMDK files.
I like to rename the VMDK files so they’re more descriptive.
[andrew@esxi02:/vmfs/volumes/nfs1] mv vQFX-RE/packer-virtualbox-ovf-1547241286-disk001.vmdk vQFX-RE/vqfx-re.vmdk
[andrew@esxi02:/vmfs/volumes/nfs1] mv vQFX-PFE/packer-virtualbox-ovf-1520878605-disk001.vmdk vQFX-PFE/vqfx-pfe.vmdk
[andrew@esxi02:/vmfs/volumes/nfs1] ls -l vQFX* | grep .vmdk
-rw-rw----    1 501      20     1073497600 Mar 12  2018 vqfx-pfe.vmdk
-rw-r--r--    1 501      20      449420800 Jan 11  2019 vqfx-re.vmdk
Create vQFX10K RE instance
Before you can create a RE instance, you need to do some preliminary steps.
1. Create a directory
The first step is to create a directory where we want our VM files to reside. In this case, I created a vQFX-RE01 directory in my NFS1 datastore.
[andrew@esxi02:/vmfs/volumes/nfs1] mkdir vQFX-RE01
2. Convert VMDK file
The next step is to convert the VMDK file to a thin-provisioned disk.
[andrew@esxi02:/vmfs/volumes/nfs1] vmkfstools -i vQFX-RE/vqfx-re.vmdk -d thin vQFX-RE01/vqfx-re.vmdk
Destination disk format: VMFS thin-provisioned
Cloning disk 'vQFX-RE/vqfx-re.vmdk'...
Clone: 100% done.
3. View OVF file
Next, you need to know the specs to deploy the VM instance. You can do this by viewing the OVF file.
[andrew@esxi02:/vmfs/volumes/nfs1] cat vQFX-RE/box.ovf | grep -E '(CPU|memory|OSType)'
<vbox:OSType ovf:required="false">FreeBSD</vbox:OSType>
<rasd:Caption>1 virtual CPU</rasd:Caption>
<rasd:Description>Number of virtual CPUs</rasd:Description>
<rasd:ElementName>1 virtual CPU</rasd:ElementName>
<rasd:Caption>1024 MB of memory</rasd:Caption>
<rasd:ElementName>1024 MB of memory</rasd:ElementName>
<vbox:Machine ovf:required="false" version="1.15-macosx" uuid="{d3e98788-7b29-473d-a2cf-879962f7a893}" name="packer-virtualbox-ovf-1547241286" OSType="FreeBSD" snapshotFolder="Snapshots" lastStateChange="2019-01-11T21:30:41Z">
<CPU>
</CPU>
As you can see, you’ll need 1 x vCPU, 1GB of RAM, and FreeBSD as the OS type. With this information, you’re now ready to deploy the vQFX10K RE VM.
4. Create vQFX10K RE VM
To create the vQFX10K RE VM, click on Virtual Machines under the Navigator pane and then click on Create / Register VM. ESXi will present you with the New virtual machine wizard. Just hit the Next button, since Create a new virtual machine is selected by default.
The next window will ask you the name of the VM, compatibility, guest OS family, and guest OS version. Use the following settings from the screenshot below.

Next, ESXi is going to ask which datastore you want your VM to reside in. If you only have one, then hit Next. If you have multiple datastores, then select the one you want to use and hit the Next button.
The next window is where you can customize the settings for your RE VM instance. There are multiple settings that you’re going to change here.
The first setting you're going to change is the hard disk. You need to remove the preconfigured virtual disk and replace it with the cloned VMDK file. Then change the Controller location setting to IDE controller 0.

The next setting you’re going to change is the network adapters. For the first network adapter, select the proper port group that you want to use. This vNIC is for the out-of-band management interface.
After configuring the first network adapter, you now need to add a second network adapter. This vNIC is for the L2 connectivity between RE and PFE.
Note: Make sure to select the E1000 network adapter type for both vNICs.

Optional: If you’re using ESXi Enterprise Plus, you can add a serial port device, which allows you to console into the VM without logging into ESXi. This step assumes that ESXi is configured to accept the connection.

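If you add the network-backed serial port, ESXi's firewall must also permit the connection. A minimal sketch, assuming the serial port was configured with a telnet://:5001 "Use network" backing as shown above; the remoteSerialPort ruleset is ESXi's built-in rule for this.

```shell
#!/bin/sh
# Sketch: allow inbound connections to network-backed virtual serial ports
# (e.g., tcp/5001 used later in this guide). remoteSerialPort is the built-in
# ESXi firewall ruleset for this feature. run() prints each command and only
# executes it when RUN=1 on the ESXi host.
run() { echo "+ $*"; if [ "${RUN:-0}" = "1" ]; then "$@"; fi; }

run esxcli network firewall ruleset set --ruleset-id=remoteSerialPort --enabled=true
run esxcli network firewall ruleset list
```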
Create vQFX10K PFE instance
The steps to create the PFE VM instance are pretty much the same as for the RE VM instance. For completeness' sake, I am going to demonstrate them so you can follow along.
1. Create a directory
The first step is to create a directory where we want our VM files to reside. In this case, I created a vQFX-PFE01 directory in my NFS1 datastore.
[andrew@esxi02:/vmfs/volumes/nfs1] mkdir vQFX-PFE01
2. Convert VMDK file
The next step is to convert the VMDK file to a thin-provisioned disk.
[andrew@esxi02:/vmfs/volumes/nfs1] vmkfstools -i vQFX-PFE/vqfx-pfe.vmdk -d thin vQFX-PFE01/vqfx-pfe.vmdk
Destination disk format: VMFS thin-provisioned
Cloning disk 'vQFX-PFE/vqfx-pfe.vmdk'...
Clone: 100% done.
3. View OVF file
Next, you need to know the specs to deploy the VM instance. You can do this by viewing the OVF file.
[andrew@esxi02:/vmfs/volumes/nfs1] cat vQFX-PFE/box.ovf | grep -E '(CPU|memory|OSType)'
<vbox:OSType ovf:required="false">Ubuntu_64</vbox:OSType>
<rasd:Caption>1 virtual CPU</rasd:Caption>
<rasd:Description>Number of virtual CPUs</rasd:Description>
<rasd:ElementName>1 virtual CPU</rasd:ElementName>
<rasd:Caption>2048 MB of memory</rasd:Caption>
<rasd:ElementName>2048 MB of memory</rasd:ElementName>
<vbox:Machine ovf:required="false" version="1.16-macosx" uuid="{56cc6edd-6efe-4c8e-ad21-cd14fe5798be}" name="packer-virtualbox-ovf-1520878605" OSType="Ubuntu_64" snapshotFolder="Snapshots" lastStateChange="2018-03-12T18:17:47Z">
<CPU>
</CPU>
As you can see, you’ll need 1 x vCPU, 2GB of RAM, and Ubuntu 64-bit as the OS type. With this information, you’re now ready to deploy the vQFX10K PFE VM.
4. Create vQFX10K PFE VM
To create the vQFX10K PFE VM, click on Virtual Machines under the Navigator pane and then click on Create / Register VM. ESXi will present you with the New virtual machine wizard. Just hit the Next button, since Create a new virtual machine is selected by default.
The next window will ask you the name of the VM, compatibility, guest OS family, and guest OS version. Use the following settings from the screenshot below.

Next, ESXi is going to ask which datastore you want your VM to reside in. If you only have one, then hit Next. If you have multiple datastores, then select the proper one and hit the Next button.
The next window is where you can customize the settings for your PFE VM instance. There are multiple settings that you're going to change here. The first setting is the hard disk. You need to remove the preconfigured virtual disk and replace it with the cloned VMDK file. Then change the Controller location setting to IDE controller 0.

The next setting you’re going to change is the network adapters. For the first network adapter, select the proper port group that you want to use. This vNIC is for the out-of-band management interface.
After configuring the first network adapter, you now need to add a second network adapter. This vNIC is for the L2 connectivity between RE and PFE.
Next, you’ll need to add a few more network adapters. These network adapters are what some folks will call revenue ports. As you can see, I only added four network adapters. You can add more if you want.
Note: Make sure all your network adapters are using E1000 as the adapter type.

Optional: Add a serial port device to the PFE VM.

Power on VMs
To power on the VMs, you can select both of them and turn them on at the same time.
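The same can be done from the ESXi shell with vim-cmd. The VM IDs below are hypothetical placeholders; look up your own with vmsvc/getallvms first.

```shell
#!/bin/sh
# Sketch: power on both vQFX VMs from the ESXi shell. The Vmids 10 and 11 are
# hypothetical -- list your VMs first and substitute the real IDs. run()
# prints each command and only executes it when RUN=1 on the ESXi host.
run() { echo "+ $*"; if [ "${RUN:-0}" = "1" ]; then "$@"; fi; }

run vim-cmd vmsvc/getallvms          # note the Vmid of the RE and PFE VMs
run vim-cmd vmsvc/power.on 10        # hypothetical Vmid of the RE VM
run vim-cmd vmsvc/power.on 11        # hypothetical Vmid of the PFE VM
run vim-cmd vmsvc/power.getstate 10  # confirm the power state afterwards
```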
Once the RE VM is finished booting, you should see something like this.

Now, if you turn your attention to the PFE VM, you’ll notice that the PFE VM appears to be stuck.

However, if you try to access it via the serial port device, you will see that it's actually in a good state. Don't worry if you're not running an Enterprise Plus license, since you can verify the state of the PFE VM later.

Note: If you want to log into the vQFX10K PFE VM, use the root/no credentials. You don't ordinarily need to log into this VM, though. If you're concerned about security, you may want to change the password.
Logging in
I like to log in via the serial port, so I use the telnet command to interact with the VM. Alternatively, you can use ESXi/vSphere/Fusion to interact with the vQFX10K RE VM. Use the root/Juniper credentials to log in.
andrew@networkjutsu$ telnet esxi02 5001
Trying 192.168.100.102...
Connected to esxi02.networkjutsu.local.
Escape character is '^]'.

vqfx-re (ttyd0)

login: root
Password: Juniper

--- JUNOS 18.4R1.8 built 2018-12-17 03:30:15 UTC
root@vqfx-re:RE:0%
Verify RE and PFE connectivity
Once logged in, you may want to verify that the RE and PFE can communicate with each other via the internal bridge. This internal bridge uses the vQFX01 port group that you created earlier.
root@vqfx-re:RE:0% cli
{master:0}
root@vqfx-re> show chassis fpc
                     Temp  CPU Utilization (%)   CPU Utilization (%)  Memory    Utilization (%)
Slot State            (C)  Total  Interrupt      1min   5min   15min  DRAM (MB) Heap     Buffer
  0  Online           Testing   1         0        0      0      0    1024        0         84
  1  Empty
  2  Empty
  3  Empty
  4  Empty
  5  Empty
  6  Empty
  7  Empty
  8  Empty
  9  Empty
Another way to verify that the RE can communicate with the PFE is by checking which revenue ports are available. Since I only added four network adapters on the PFE VM, the only usable ports are xe-0/0/0 through xe-0/0/3.
root@vqfx-re> show interfaces terse | match xe
xe-0/0/0                up    up
xe-0/0/0.0              up    up   inet
xe-0/0/1                up    up
xe-0/0/1.0              up    up   inet
xe-0/0/2                up    up
xe-0/0/2.0              up    up   inet
xe-0/0/3                up    up
xe-0/0/3.0              up    up   inet
xe-0/0/4                up    up
xe-0/0/4.0              up    up   inet
xe-0/0/5                up    up
xe-0/0/5.0              up    up   inet
xe-0/0/6                up    up
xe-0/0/6.0              up    up   inet
xe-0/0/7                up    up
xe-0/0/7.0              up    up   inet
xe-0/0/8                up    up
xe-0/0/8.0              up    up   inet
xe-0/0/9                up    up
xe-0/0/9.0              up    up   inet
xe-0/0/10               up    up
xe-0/0/10.0             up    up   inet
xe-0/0/11               up    up
xe-0/0/11.0             up    up   inet
Device management
The out-of-band management interface names vary from one platform to another. On the vQFX10K, the out-of-band management interface is em0.
By default, the em0 interface has DHCP enabled. Depending on your network, you may have to disable DHCP and assign a static IP address. In my case, em0 is in a VLAN where DHCP is disabled.
To statically assign an IP address on the management interface, use the following command.
root@vqfx-re> configure
Entering configuration mode

{master:0}[edit]
root@vqfx-re# edit interfaces em0

{master:0}[edit interfaces em0]
root@vqfx-re# delete unit 0 family inet dhcp

{master:0}[edit interfaces em0]
root@vqfx-re# set unit 0 family inet address 192.168.1.60/24

{master:0}[edit interfaces em0]
root@vqfx-re# commit
configuration check succeeds
commit complete
Verify that it’s working properly by pinging from a host within the same network.
andrew@networkjutsu$ ping 192.168.1.60 -c 2
PING 192.168.1.60 (192.168.1.60): 56 data bytes
64 bytes from 192.168.1.60: icmp_seq=0 ttl=63 time=24.013 ms
64 bytes from 192.168.1.60: icmp_seq=1 ttl=63 time=29.325 ms

--- 192.168.1.60 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 24.013/26.669/29.325/2.656 ms
Dedicated management VRF
If you want to reach the out-of-band management from a different network, then you need to create a separate routing instance. By default, there is no clear separation between out-of-band management and in-band traffic.
To enable a dedicated management virtual routing and forwarding (VRF) instance, use the command below. This command will use the dedicated mgmt_junos management instance, which is reserved and hardcoded.
{master:0}[edit interfaces em0]
root@vqfx-re# top

{master:0}[edit]
root@vqfx-re# set system management-instance
Once you commit the change, Junos OS will automatically add the routing instance.
{master:0}[edit]
root@vqfx-re# commit
configuration check succeeds
commit complete

{master:0}[edit]
root@vqfx-re# run show route all | match mgmt_junos
mgmt_junos.inet.0: 2 destinations, 2 routes (2 active, 0 holddown, 0 hidden)
Now you're ready to add a static route to the management VRF, which will allow you to reach the management interface from another network. In this scenario, I've added just the 192.168.0.0/16 network. Alternatively, you could add a 0.0.0.0/0 default route.
{master:0}[edit]
root@vqfx-re# set routing-instances mgmt_junos routing-options static route 192.168.0.0/16 next-hop 192.168.1.1

{master:0}[edit]
root@vqfx-re# commit
configuration check succeeds
commit complete
You can verify by pinging a host on another network from your vQFX instance. Once connectivity is verified, you're ready to play with your vQFX10K.
{master:0}[edit]
root@vqfx-re# run ping 192.168.100.250 routing-instance mgmt_junos count 2
PING 192.168.100.250 (192.168.100.250): 56 data bytes
64 bytes from 192.168.100.250: icmp_seq=0 ttl=63 time=0.744 ms
64 bytes from 192.168.100.250: icmp_seq=1 ttl=63 time=0.630 ms

--- 192.168.100.250 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.630/0.687/0.744/0.057 ms
Final Thoughts
As demonstrated, deploying vQFX10K is a little more involved than vSRX or vMX. If you’re trying to study for the JNCIA-Junos exam, then you may want to stick with vSRX. You’ll save time by using the OVA file. On top of that, it’s only one VM to deploy.
If you're studying for a higher-level Juniper cert exam, then you'll probably want to deploy vQFX VM(s). The vQFX10K will allow you to practice switching-related commands that will aid in passing the test.