Matias Bjørling edited this page Apr 9, 2015 · 21 revisions

How to use

Open-Channel SSDs require kernel support. As the code has not yet been sent upstream, a modified kernel that implements the support must be used. See the requirements section for how to compile the necessary modules.

After the modified kernel has booted, either the NVMe module or the null_blk module is loaded. Use the following command to initialize a target (FTL) on top of an open-channel compatible device:

echo "rrpc mytarget 0:0" > /sys/block/[nvmeXnY|nullbX]/nvm/register

rrpc is the target type. rrpc is the default FTL in the kernel; unless you are implementing your own target, use this.

mytarget is the name the target is exported under in /dev.

0:0 is start channel:end channel. It defines the range of channels of the attached open-channel SSD that should be allocated to the target.

After the target has been registered successfully, you may issue reads and writes to /dev/mytarget.
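As a concrete sketch, assuming the device appeared as nvme0n1 (the device name here is an example; substitute the one on your system), registration followed by a quick read test might look like:

```shell
# Register the rrpc target on channel range 0:0 (names follow the example above).
echo "rrpc mytarget 0:0" > /sys/block/nvme0n1/nvm/register

# Issue a small read to verify that the target responds.
dd if=/dev/mytarget of=/dev/null bs=4096 count=1
```

Both commands must be run as root, since they write to sysfs and read a raw block device.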

Requirements

Check out the Linux kernel

git clone https://github.com/OpenChannelSSD/linux.git

Configure it to at least include

CONFIG_BLK_DEV_NVM=y
CONFIG_NVM=y
CONFIG_NVM_RRPC=y
# For null_blk support
CONFIG_BLK_DEV_NULL_BLK=m
# For NVMe support
CONFIG_BLK_DEV_NVME=m
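With those options set, a typical build-and-install sequence follows the standard kernel workflow (nothing here is specific to this tree; adjust to your distribution's conventions):

```shell
cd linux
make olddefconfig                    # keep the existing config, take defaults for new options
make -j"$(nproc)"                    # build the kernel and modules
sudo make modules_install install    # install modules, kernel image, and update the bootloader
```

Reboot into the new kernel before continuing.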

Initialize using null_blk driver

Instantiate the module with the following parameters

queue_mode=2 gb=4 nr_devices=1 lightnvm_enable=1 lightnvm_num_channels=1 bs=4096
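These parameters are passed at module load time; a full invocation would look like this (assuming null_blk was built as a module, per the config above):

```shell
# Load null_blk as a 4GB, single-channel LightNVM device with 4K sectors.
modprobe null_blk queue_mode=2 gb=4 nr_devices=1 lightnvm_enable=1 lightnvm_num_channels=1 bs=4096
```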

That will instantiate the LightNVM driver with a 4GB SSD and a single channel. You can verify that it was instantiated by checking the kernel log:

dmesg | grep lightnvm

where the output should be similar to

[    0.359255] lightnvm: pools: 1
[    0.359773] lightnvm: blocks: 512
[    0.360364] lightnvm: pages per block: 256
[    0.361037] lightnvm: append points: 1
[    0.361663] lightnvm: append points per pool: 1
[    0.362423] lightnvm: timings: 25/500/1500
[    0.363081] lightnvm: target sector size=4096
[    0.363779] lightnvm: disk flash size=4096 map size=4096
[    0.364665] lightnvm: allocated 131072 physical pages (524288 KB)
[    0.365740]  nullb0: unknown partition table

Instantiate NVMe driver using QEMU

If you have a LightNVM-compatible device, simply plug it in and it should be found. If you don't, then you can use the LightNVM-enabled QEMU branch to prototype with.

It's based on top of Keith Busch's qemu-nvme branch that implements an NVMe compatible device.

QEMU Installation

Clone the qemu source from

git clone https://github.com/OpenChannelSSD/qemu-nvme.git

and configure the QEMU source with

./configure --enable-linux-aio --target-list=x86_64-softmmu --enable-kvm

then build and install it.
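A typical sequence for this step (standard QEMU build workflow, run from the cloned qemu-nvme directory) is:

```shell
make -j"$(nproc)"     # build the configured x86_64-softmmu target
sudo make install     # install qemu-system-x86_64 system-wide
```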

Configure QEMU

Create an empty file to hold your NVMe device.

dd if=/dev/zero of=blknvme bs=1M count=1024

This creates a zeroed 1GB file called "blknvme". From there, you can boot your favorite Linux image with

qemu-system-x86_64 -m 4G -smp 1,cores=4 --enable-kvm \
  -hda YourLinuxVMFile -append "root=/dev/sda1" \
  -kernel "/home/foobar/git/linux/arch/x86_64/boot/bzImage" \
  -drive file=blknvme,if=none,id=mynvme \
  -device nvme,drive=mynvme,serial=deadbeef,namespaces=1,lver=1,lchannels=1,nlbaf=5,lba_index=3,mdts=10

Replace YourLinuxVMFile with your preinstalled Linux virtual machine image. LightNVM is enabled with lver=1, lchannels=1 sets the number of LightNVM channels to one, and the last part defines the page size to be 4K.
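Once the guest has booted, you can check that the device was detected and exposes the open-channel interface (the device name nvme0n1 is an assumption; it may differ on your system):

```shell
dmesg | grep lightnvm       # look for lightnvm initialization messages
ls /sys/block/nvme0n1/nvm   # the nvm directory holds the register interface used above
```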

Common Problems

Kernel panic on boot using NVMe

  1. Zero out your NVMe backend file: dd if=/dev/zero of=backend_file bs=1M count=X

  2. Remember to upgrade the qemu-nvme branch as well; the linux and qemu-nvme repositories follow each other.
