Now that you have a toolchain in place, let's start by cross-compiling the Linux kernel with its default configuration. First, clone the riscv-linux repository…
git clone --depth=1 https://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux.git <linux sources dir>
Then configure it with the default configuration, which includes virtio support so that you can use this kernel with QEMU later on.
cd <linux sources dir>
ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- make defconfig
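If you want to confirm that virtio support made it into the configuration (an optional sanity check, not part of the original steps), you can grep the generated .config:
grep -i virtio .config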
And finally build the kernel…
ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- make
You should now have the kernel image as vmlinux; you can verify that it was compiled for RISC-V using the file command…
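file vmlinux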
vmlinux: ELF 64-bit LSB executable, UCB RISC-V, version 1 (SYSV), statically linked, BuildID[sha1]=..., not stripped
In order to boot the kernel you just built, you'll need a bootloader that supports the RISC-V Linux boot protocol and the Supervisor Binary Interface (SBI). The Berkeley Boot Loader (BBL) will do the trick.
git clone --depth=1 https://github.com/riscv/riscv-pk.git <bbl sources dir>
cd <bbl sources dir>
mkdir build
cd build
../configure --enable-logo --host=riscv64-unknown-linux-gnu
make
If you want, you can also link the kernel and the bootloader together to create a single binary for systems/emulators that can't distinguish between them (they just expect to run one binary). To do that, pass the --with-payload argument to the configure script, pointing it to the kernel image; the resulting bbl binary will then have the kernel embedded in it.
../configure --enable-logo --host=riscv64-unknown-linux-gnu \
	--with-payload=<linux sources dir>/vmlinux
A more recent and better-maintained alternative to BBL, meant to be the reference SBI implementation, is OpenSBI. It can be used both as a library by boot loaders and as a standalone Linux boot loader, and it supports various targets, including QEMU.
git clone --depth=1 https://github.com/riscv/opensbi.git <osbi sources dir>
cd <osbi sources dir>
CROSS_COMPILE=riscv64-unknown-linux-gnu- PLATFORM_RISCV_XLEN=<32 or 64> \
	make PLATFORM=<target name, e.g. qemu/virt>
You may also specify a built-in payload, as with BBL, by passing the FW_PAYLOAD_PATH variable to make.
CROSS_COMPILE=riscv64-unknown-linux-gnu- PLATFORM_RISCV_XLEN=<32 or 64> \
	make PLATFORM=<target name, e.g. qemu/virt> \
	FW_PAYLOAD_PATH=<linux sources dir>/arch/riscv/boot/Image
In the first case the resulting binary will be called fw_jump.elf, and in the second case fw_payload.elf; .bin files with the same names are also generated, for use directly on hardware boards (QEMU only supports loading ELFs). These files will be available at <osbi sources dir>/build/platform/<platform name>/firmware/.
In order to test the kernel and the various kernel modules from the process above, you need to create a simple rootfs. Let's start by creating a directory and populating it with a typical directory layout…
mkdir -p <rootfs dir>/{bin,sbin,dev,proc,sys,etc,var/run,lib/firmware,mnt}
…and install the kernel modules in there (run this from within <linux sources dir>).
INSTALL_MOD_PATH=<rootfs dir> ARCH=riscv make modules_install
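If everything worked, a directory named after the kernel release you just built should now exist under the rootfs:
ls <rootfs dir>/lib/modules/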
BusyBox is a set of common UNIX tools and services combined into a single program, which can also be statically compiled to provide a system-in-a-box without any external dependencies. Since it's a whole suite of tools, its configuration process is similar to that of the kernel. You may use make menuconfig and work your way through, use make defconfig to get a full-featured build (remember to edit the configuration and properly set CONFIG_PREFIX to <rootfs dir>), or you may grab busybox-config.gz and use that instead.
git clone -b 1_29_stable --depth=1 git://busybox.net/busybox.git <busybox sources dir>
cd <busybox sources dir>
zcat busybox-config.gz | sed s#"../../rootfs"#<rootfs dir>#g > .config
ARCH=riscv make
make install
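As an optional sanity check, you can verify that the BusyBox binary you just built is a RISC-V executable (it should also show up as statically linked if your configuration enables a static build):
file <busybox sources dir>/busybox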
The make install step installs BusyBox into the rootfs directory and creates the symbolic links to the BusyBox binary. All that remains now is to let Linux use BusyBox as the init program; to do that you need to add a symbolic link from <rootfs dir>/init to the BusyBox binary, create an init script, and create an inittab. To save time, here is an example of an init script…
#!/bin/bash

# Mount /dev, /proc and /sys
mount -t devtmpfs none /dev
mount -t proc -o nodev,noexec,nosuid proc /proc
mount -t sysfs -o nodev,noexec,nosuid sysfs /sys

# Initialize networking
ip link set lo up
ip addr add 127.0.0.1 dev lo
ip route add 127.0.0.0/255.0.0.0 dev lo
ip link show eth0 | grep eth0 &> /dev/null
if [[ $? == 0 ]]; then
	ip link set eth0 up
fi

echo -e "\t\tWelcome to RISC-V !"
echo "$(uname -s -v -r -p -m)"

exec setsid cttyhack /bin/bash
And here is an example of an inittab, which assumes you've put the above init script at <rootfs dir>/sbin/initscript…
::sysinit:/sbin/initscript
::shutdown:/bin/umount -a -r > /dev/null
::restart:/sbin/init
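Also remember to make the init script executable, otherwise init won't be able to run it:
chmod +x <rootfs dir>/sbin/initscript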
BusyBox expects to find the above inittab file at /etc/inittab, so put it at <rootfs dir>/etc/inittab and you're done. Finally, create the symbolic link for <rootfs dir>/init…
cd <rootfs dir>
ln -s ./bin/busybox ./init
Since <rootfs dir> is now populated and ready, let's make an initramfs image out of it. Initramfs images are cpio archives compressed with one of the kernel-supported compression algorithms; of these, xz is probably the most efficient, so that's what we'll use here.
cd <rootfs dir>
find . -print0 | cpio --null -ov --format=newc > /tmp/initramfs.cpio
cd /tmp/
xz -e -T0 --check=crc32 --lzma2=dict=32MiB initramfs.cpio
mv initramfs.cpio.xz initramfs.img
Copy the initramfs.img file from /tmp to wherever you want; you may now test your bootloader + Linux kernel + initramfs setup with an emulator.
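For example, here is a minimal QEMU invocation sketch, assuming a RISC-V-capable QEMU and that you built OpenSBI for the qemu/virt platform with the kernel image as its payload (paths and flags may need adjusting for your setup; console=ttyS0 matches the virt machine's serial port):
qemu-system-riscv64 -nographic -machine virt \
	-kernel <osbi sources dir>/build/platform/qemu/virt/firmware/fw_payload.elf \
	-append "console=ttyS0" \
	-initrd initramfs.img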
BusyBear is a small Linux environment that utilizes BusyBox and Dropbear. Its build process will do most of the above for you; you may follow the instructions here.