Matias Bjørling edited this page May 1, 2015 · 21 revisions

Requirements

Check out the Linux kernel:

git clone https://github.com/OpenChannelSSD/linux.git

Configure it to at least include

CONFIG_NVM=y
CONFIG_NVM_RRPC=y
# For null_blk support
CONFIG_BLK_DEV_NULL_BLK=m
# For NVMe support
CONFIG_BLK_DEV_NVME=m
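After configuring, a quick way to confirm the options made it into the build is to grep the config file. The sketch below writes a stand-in sample.config so it is self-contained; point CFG at the .config in your kernel source tree instead.

```shell
# Check that each required option is set in the kernel config.
# sample.config here is a stand-in; use the .config from your tree.
CFG=sample.config
cat > "$CFG" <<'EOF'
CONFIG_NVM=y
CONFIG_NVM_RRPC=y
CONFIG_BLK_DEV_NULL_BLK=m
CONFIG_BLK_DEV_NVME=m
EOF
for opt in CONFIG_NVM CONFIG_NVM_RRPC CONFIG_BLK_DEV_NULL_BLK CONFIG_BLK_DEV_NVME; do
    grep -q "^$opt=" "$CFG" && echo "$opt is set" || echo "$opt is MISSING"
done
```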

How to use

Open-Channel SSDs require support in the kernel. As the code is in the process of being upstreamed (lkml), a modified kernel implementing this support must be used. In order to compile the necessary modules see the requirements section.

After the modified kernel has booted and either the NVMe or null_blk module is loaded, use the following command to initialize a target (FTL) on top of an open-channel compatible device.

echo "rrpc mytarget 0:0" > /sys/block/[nvmeXnY|nullbX]/nvm/configure

rrpc is the target type, and is the default FTL in the kernel. Unless you're implementing your own target, use this.

mytarget is the name under which the target is exported, i.e. /dev/mytarget.

0:0 is the start channel:end channel range, i.e. which channels of the attached open-channel SSD should be allocated to the target.

After successfully registering the target, you may issue reads and writes to /dev/mytarget.
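The configure line can also be scripted. The sketch below only assembles and prints the string (nullb0, mytarget, and the 0:0 range are example values); the actual sysfs write is commented out because it requires root and a LightNVM-enabled device.

```shell
# Build the "type name start:end" configure string from variables.
DEV=nullb0            # or e.g. nvme0n1 for an NVMe device
TARGET=mytarget       # exported as /dev/mytarget
CHANNELS="0:0"        # start channel : end channel
echo "rrpc $TARGET $CHANNELS"
# On a real system (as root):
# echo "rrpc $TARGET $CHANNELS" > /sys/block/$DEV/nvm/configure
```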

Initialize using null_blk driver

Load the null_blk module with the following parameters:

modprobe null_blk queue_mode=2 gb=4 nr_devices=1 nvm_enable=1 nvm_num_channels=1

This instantiates a 4GB null_blk device with LightNVM enabled and a single channel. You can verify that it was instantiated by checking the kernel log.

dmesg | grep lightnvm

where the output should be similar to

[    0.359255] nvm: pools: 1
[    0.359773] nvm: blocks: 512
[    0.360364] nvm: pages per block: 256
[    0.361037] nvm: append points: 1
[    0.361663] nvm: append points per pool: 1
[    0.362423] nvm: timings: 25/500/1500
[    0.363081] nvm: target sector size=4096
[    0.363779] nvm: disk flash size=4096 map size=4096
[    0.364665] nvm: allocated 131072 physical pages (524288 KB)
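The figures in the last line follow from the geometry printed above: 1 pool × 512 blocks × 256 pages per block gives the physical page count, and multiplying by the 4096-byte sector size gives the capacity in KB.

```shell
# 1 pool x 512 blocks x 256 pages/block = 131072 physical pages;
# 131072 pages x 4096 bytes / 1024 = 524288 KB.
pages=$((1 * 512 * 256))
echo "$pages physical pages"
echo "$((pages * 4096 / 1024)) KB"
```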

Instantiate NVMe driver using QEMU

If you have a LightNVM-compatible device, simply plug it in and it should be found. If you don't, then you can use the LightNVM-enabled QEMU branch to prototype with.

It's based on top of Keith Busch's qemu-nvme branch that implements an NVMe compatible device.

QEMU Installation

Clone the qemu source from

git clone https://github.com/OpenChannelSSD/qemu-nvme.git

and configure the QEMU source with

./configure --enable-linux-aio --target-list=x86_64-softmmu --enable-kvm

then build and install (the install step typically requires root):

make
make install

Configure QEMU

Create a backing file for the emulated NVMe device.

dd if=/dev/zero of=blknvme bs=1M count=1024

This creates a zeroed 1GB file called "blknvme". From there, you can boot your favorite Linux image with

qemu-system-x86_64 -m 4G -smp 1,cores=4 --enable-kvm \
  -hda YourLinuxVMFile -append "root=/dev/sda1" \
  -kernel "/home/foobar/git/linux/arch/x86_64/boot/bzImage" \
  -drive file=blknvme,if=none,id=mynvme \
  -device nvme,drive=mynvme,serial=deadbeef,namespaces=1,lver=1,lchannels=1,nlbaf=5,lba_index=3,mdts=10

where you replace YourLinuxVMFile with your preinstalled Linux virtual machine image. LightNVM is enabled with lver=1, lchannels=1 sets a single channel, and the last parameters (nlbaf=5, lba_index=3) define a 4K page size.

QEMU supports the following LightNVM-specific parameters:

- lver=<int>        : Version of the LightNVM standard to use. Default: 1
- ltype=<nvmtype>   : Whether the device is block- or byte-addressable. Default: 0 (block)
- lchannels=<int>   : Number of channels per namespace. Default: 4
- lreadl2ptbl=<int> : Load the logical-to-physical table. 1: yes, 0: no. Default: 1
- lbbtable=<file>   : Path to a file to load the bad block table from. If no file is provided, a bad block table is generated (see lbbfrequency). Default: Null (no file)
- lbbfrequency=<int>: Bad block frequency used when generating a bad block table. If no frequency is provided, LNVM_DEFAULT_BB_FREQ is used.

The list of LightNVM parameters in QEMU can be found in $QEMU_DIR/hw/block/nvme.c under the "Advanced optional options" comment.
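As an illustration of combining these options, the sketch below assembles a -device string with four channels; the values are examples, not recommendations, and the string is only printed, not passed to QEMU.

```shell
# Build a -device argument using the LightNVM parameters above.
# drive/serial match the earlier example; lchannels=4 is arbitrary.
opts="nvme,drive=mynvme,serial=deadbeef,namespaces=1"
opts="$opts,lver=1,lchannels=4,lreadl2ptbl=1"
echo "-device $opts"
```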

Common Problems

Kernel panic on boot using NVMe

  1. Zero out your NVMe backing file: dd if=/dev/zero of=backend_file bs=1M count=X

  2. Remember to update the qemu-nvme branch as well. The linux and qemu-nvme repositories follow each other.
