Bootable FreeBSD image on CompactFlash Card

One of the hacking toys I have is an Alix 1.C board which runs FreeBSD (9.1 at the time of writing). I started out net-booting the board, but I thought it would be nice to also boot it from a CompactFlash (CF) card I had lying around. In fact, I wanted to create a bootable image for the CF card which I could transfer to it via dd(1).

Prerequisites

Before the show could start, I needed to find out the size of the CF card I had. I did this by writing zeros to the entire card and letting dd(1) report how much it wrote.

root@T61p:~# dd if=/dev/zero of=/dev/sdb bs=512
dd: writing `/dev/sdb': No space left on device
1000945+0 records in
1000944+0 records out
512483328 bytes (512 MB) copied, 1099.51 s, 466 kB/s

I did this by inserting the card into a laptop through a PC Card adapter. The laptop on which I do my hacking runs Linux (Ubuntu 12.10 at the time of writing), so the above is the output of a Linux command. This revealed that the card has 1,000,944 blocks of 512 Bytes, i.e. a total capacity of 512,483,328 Bytes or 488.74 MBytes.

Setting up the Partition Table

Now that we know the size of the CF card, we can create the bootable image. The first step is to create a file of exactly the same size as the CF card. This can be done with a command like this:

[phs@freebsd ~]$ dd if=/dev/zero of=freebsd-alix.img bs=512 count=1000944
1000944+0 records in
1000944+0 records out
512483328 bytes transferred in 18.620906 secs (27521933 bytes/sec)

Next, we need a virtual disk based on that image to work with. The solution is to create a memory disk as described in the FreeBSD Handbook’s section 19.13 on virtual disks. As root, do this:

freebsd# mdconfig -a -t vnode -f /home/phs/freebsd-alix.img -u 0

This creates /dev/md0. Of course, nothing’s on that disk, yet. The first thing to do is to create a partition table.

freebsd# gpart create -s gpt md0
md0 created

This creates an empty, UEFI-compatible GPT partition table with no partitions in it, yet. You can examine the partition table like this:

freebsd# gpart show
(...)
=>     34  1000877  md0  GPT  (488M)
       34  1000877       - free -  (488M)

In order to boot FreeBSD from the image later, at least two partitions are required: a 512K partition containing the boot code, and one partition containing the actual operating system. Because I wrote an I2C device driver for the Alix 1.C, I also want a swap partition so I have some space for kernel dumps.

freebsd# gpart add -t freebsd-boot -l boot -s 512K md0
md0p1 added
freebsd# gpart add -t freebsd-swap -l swap -s 64M md0
md0p2 added

Before adding the third partition for the FreeBSD operating system, find out how much space is left on the device by examining the partition table:

freebsd# gpart show md0
=>     34  1000877  md0  GPT  (488M)
       34     1024    1  freebsd-boot  (512k)
     1058   131072    2  freebsd-swap  (64M)
   132130   868781       - free -  (424M)

Ok, now create the data partition:

freebsd# gpart add -t freebsd-ufs -l rootfs -s 868781 md0
md0p3 added
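
At this point, the whole device is accounted for. Based on the sizes above, the partition table should now look roughly like this (a sketch, not captured output):

freebsd# gpart show md0
=>     34  1000877  md0  GPT  (488M)
       34     1024    1  freebsd-boot  (512k)
     1058   131072    2  freebsd-swap  (64M)
   132130   868781    3  freebsd-ufs  (424M)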

Embedding the boot code

Now that the partition structure is laid out, it’s time to make the image bootable. FreeBSD’s gpart(8) manual page offers great information on what the boot code files are and do: pmbr is a "protective MBR" which allows legacy, BIOS-based systems to boot the image. The MBR code searches the GPT for a freebsd-boot partition and starts the secondary boot code from there. In this case, that’s gptboot, which searches the GPT for a freebsd-ufs partition from which it loads /boot/loader.

The following command writes the boot code to the image. Note that I’m using the files from /usr/obj, i.e. the ones most recently built from source, not the ones currently installed.

freebsd# gpart bootcode -b /usr/obj/usr/src/sys/boot/i386/pmbr/pmbr \
  -p /usr/obj/usr/src/sys/boot/i386/gptboot/gptboot -i 1 md0
bootcode written to md0

Initializing the root file system partition

The third partition, which is supposed to store the operating system, must be initialized with a file system before files can be written to it.

freebsd# newfs -L FreeBSD -U /dev/gpt/rootfs 
/dev/gpt/rootfs: 424.2MB (868776 sectors) block size 32768, fragment size 4096
        using 4 cylinder groups of 106.06MB, 3394 blks, 13696 inodes.
        with soft updates
super-block backups (for fsck_ffs -b #) at:
 192, 217408, 434624, 651840

This creates an empty UFS2-based file system with soft-updates enabled on the partition which can be mounted:

freebsd# mount /dev/md0p3 /mnt
freebsd# df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
(...)
/dev/md0p3     410M    8.0k    377M     0%    /mnt

Installing FreeBSD to the bootable image

Installing the operating system is as easy as this:

[phs@freebsd /usr/src]$ sudo make installkernel KERNCONF=ALIX1C DESTDIR=/mnt
--------------------------------------------------------------
>>> Installing kernel ALIX1C
--------------------------------------------------------------
(...)
[phs@freebsd /usr/src]$ sudo make installworld DESTDIR=/mnt
(...)
[phs@freebsd /usr/src]$ sudo make DESTDIR=/mnt distrib-dirs distribution
(...)

Before the image is really useful, you probably want to adjust a few config files here and there. In particular, etc/fstab should be configured so that the root file system is found and mounted when booting:

[phs@freebsd /mnt]$ cat etc/fstab 
# Device        Mountpoint  FSType  Options Dump    Pass
/dev/ada0p3     /           ufs     ro      0       0
/dev/ada0p2     none        swap    sw      0       0
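
Since the partitions were created with GPT labels, the label-based device nodes under /dev/gpt could be used instead. A variant sketch, assuming the labels boot, swap and rootfs from earlier:

# Device           Mountpoint  FSType  Options Dump    Pass
/dev/gpt/rootfs    /           ufs     ro      0       0
/dev/gpt/swap      none        swap    sw      0       0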

In my case, the Alix 1.C board has a serial console but no screen. In order to get a login prompt on the serial console, etc/ttys needs to contain a line like this one:

ttyu0   "/usr/libexec/getty std.115200" vt100   on  secure

Finally, a root password must be set:

# pw -V /mnt/etc usermod root -h 0

When done, transfer the image to the CF card using dd(1).
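For example, after unmounting and detaching the memory disk, something like this should do (a sketch; /dev/sdb is the device name the card got on my Linux laptop and must be double-checked before writing):

freebsd# umount /mnt
freebsd# mdconfig -d -u 0
root@T61p:~# dd if=freebsd-alix.img of=/dev/sdb bs=1M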

Update Jan 24th, 2013: There are a few more files than just /etc/fstab that should be adjusted – added a few words about those.

CMake and C++ “Compile Time” Polymorphism

For a recent project of mine, I wanted to use what some people call “Compile Time Polymorphism” in C++. Here’s how I implemented it.

Polymorphism, in the context of programming, usually refers to the ability to treat objects of different data types through the same interface. In C++, this is often implemented through class inheritance and the use of virtual functions. The text book example of this concept is two classes, Cat and Dog, that inherit from a common super class Animal. Animal has a method makeSound() that is implemented by each subclass accordingly. In real software projects, polymorphism is used to hide multiple implementations behind a uniform interface. Here’s an example of how this concept is usually expressed in C++.

class Animal {
public:
 virtual void makeSound(void) = 0;
};

class Cat : public Animal {
public:
 virtual void makeSound(void);
};

class Dog : public Animal {
public:
 virtual void makeSound(void);
};

The issue with this code is that it requires the use of virtual functions, which means you need a vtable for the concrete subclasses. Usually, as a programmer, you don’t need to worry about vtables as the compiler takes care of them for you. But let’s take a look at how this works anyway. A vtable is basically a table of function pointers. For each of the concrete classes shown above, the vtable contains a pointer to the respective makeSound method. Also, each object carries a pointer to the vtable. At runtime, when a virtual method of an object is called, the pointer to the vtable is resolved to the actual vtable. From there, the address of the method is loaded and the call to it is made indirectly. So the use of virtual methods not only increases the size of your code, but also the size of your objects. In addition to that, it forces the compiler to use indirect function calls through pointers, which are usually slower than direct function calls. Again, the compiler takes care of all of that, so this is purely informational.

All of the above is okay and in fact required if you don’t know the concrete type of an object until the software actually runs. Also, in most software projects, the drawbacks don’t matter and aren’t even noticeable. But there are situations where you may not want to pay the price of virtual methods, e.g. in a resource limited embedded system.

Also, there are situations where it is known at compile time what the concrete implementation of an interface will be. This is true, for example, when you have an abstraction of an interface that is specific to a certain operating system: when you compile the software, you already know what the target operating system will be, so you can simply use and link the right implementation of the interface instead of postponing the decision to runtime.

So how would you use polymorphism in C++ without the use of virtual methods?

Here’s how you could do it:

typedef enum OperatingSystemImpl {
 Darwin,
 FreeBSD,
 Linux
} OperatingSystemImpl_t;

template<OperatingSystemImpl_t> struct OperatingSystemChoice;

class DarwinImpl;
class FreeBSDImpl;
class LinuxImpl;

template<> struct OperatingSystemChoice<Darwin> {
 typedef DarwinImpl m_type;
};

template<> struct OperatingSystemChoice<FreeBSD> {
 typedef FreeBSDImpl m_type;
};

template<> struct OperatingSystemChoice<Linux> {
 typedef LinuxImpl m_type;
};


struct OperatingSystemService {
 typedef OperatingSystemChoice< ... >::m_type m_type;
};

Of course, the ellipsis must be expanded, but more on that later. What’s important is how software would use this construct:

OperatingSystemService::m_type OsServiceObj;

The snippet above creates an object of the correct type, depending on what the ellipsis expands to. The neat thing is that the compiler ensures that the ellipsis is expanded to a valid “type” as defined in enum OperatingSystemImpl. It also makes sure that the actual, underlying class, e.g. class DarwinImpl, is declared.

In other words: if you tried to compile the software with the ellipsis expanded to Windows, you’d get a compilation error. If you had implemented this using classic polymorphism, you’d probably have some code that dynamically creates the right object depending on what input is given. That means you would have to test your compiled code by feeding it an invalid type, i.e. you must run your software. I’m convinced that finding problems earlier is better, so finding an issue when code is compiled beats finding it when the code is run.

So back to how the ellipsis is expanded. Here’s where CMake, a build system, comes into play. CMake uses input files that describe how the software needs to be compiled. Those input files, as with other build systems, are able to define compiler flags. Also, CMake defines a variable that contains the operating system’s name. I suspect it’s the output of uname. So here’s what I added to my top level CMakeLists.txt file:

add_definitions("-DOPERATING_SYSTEM=${CMAKE_SYSTEM_NAME}")
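
For example, on a Linux build host, CMAKE_SYSTEM_NAME is "Linux", so each compiler invocation effectively carries the define (a sketch; the source file name is made up):

c++ -DOPERATING_SYSTEM=Linux -c OperatingSystemService.cpp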

This makes the OPERATING_SYSTEM macro known to the pre-processor. So the code with the ellipsis can be re-written like this:

struct OperatingSystemService {
 typedef OperatingSystemChoice<OPERATING_SYSTEM>::m_type m_type;
};

Et voilà, the right type is picked when the code is compiled.

Here are the nice things about this: there is no need for virtual methods, eliminating the need for vtables. Also, invalid or unsupported operating system types will be found at compile time instead of at runtime, while the code for all supported operating systems will still always be compiled (just not used).

One apparent downside is that you no longer have an enforced interface like you do with pure virtual classes, i.e. the compiler may not tell you that you forgot to implement a newly added method in one of your implementation classes. However, this is a minor issue: you will still get a compilation error in that case, but only when compiling for the target system where the method implementation is missing.

Flashrom, Alix 1.C and FreeBSD

I’m quite surprised that flashrom works pretty much out of the box on FreeBSD running on my Alix 1.C board. All I needed to do was comment out the code in the enable_flash_cs5536() function in the chipset_enable.c file. Then, this simple command let me read out the BIOS image:

$ flashrom -f -r -c "SST49LF040" bios.bin

SSH Tricks

My shell scripting skills suck. So it comes naturally that I learn a lot every time I need to write a script. The trick I’m about to describe is so neat that I want to share it.

Assume for a second that you need to execute a series of commands on a remote machine, but you can’t pass them to SSH at once. Or consider that you might need to transfer a file over to a remote host and then extract it there. Normally, you’d need to create several SSH connections for this. In the “transfer, then extract” scenario, you need one connection for scp(1) and another one to extract the file on the remote host. Unless you have your public key in the remote host’s ~/.ssh/authorized_keys file, you need to enter your password for the remote host twice. It’s probably not a big deal for most, but it’s annoying at times.

It might be obvious to some, but I recently found out that ssh(1) offers a solution to the problem described above. It allows you to open one connection in "master" mode to which other SSH processes can connect through a socket. The options for the "master" connection are:

$ ssh -M -S /path/to/socket user@rhost

Alternatively, the ControlPath and ControlMaster options can be set in the appropriate SSH configuration files. Other SSH processes that want to connect to the "master" connection only need to use the -S option. The authentication of the "master" connection will be re-used for all other connections that go through the socket. I’m not sure if SSH even opens a separate TCP connection.
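
For example, with a "master" connection up, the "transfer, then extract" scenario from above needs only one authentication (a sketch, assuming user@rhost and a tar-ball to transfer; scp(1) passes ssh options through via -o):

$ scp -o ControlPath=/path/to/socket archive.tar.gz user@rhost:
$ ssh -S /path/to/socket user@rhost "tar -xzf archive.tar.gz"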

Going further, this can be used in scripts to save the user a lot of password typing, especially if the execution flow switches between local and remote commands a lot. At the beginning of the script, simply create a "master" connection like this:

$ ssh -M -S /path/to/socket -f user@rhost \
    "while true ; do sleep 3600 ; done"

It should be noted that the path to the socket can be made unique by using a combination of the placeholders %l, %h, %p, %r for the local host, remote host, port and remote username, respectively. The -f parameter makes SSH go into the background just before command execution. However, it requires that a command is specified, hence an endless loop of sleep(1) calls is added to the command line. Other SSH connections can be opened like this, not requiring any further user input (for authentication):

$ ssh -S /path/to/socket user@rhost

This leaves the problem of how the "master" process can be terminated when the script has finished. Some versions of SSH offer the -O parameter which can be used to check if the "master" is still running and, more importantly, to tell the "master" to exit. Note that in addition to the socket, the remote user and host need to be specified.

$ ssh -S /path/to/socket -O check user@rhost
$ ssh -S /path/to/socket -O exit user@rhost
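
Putting it all together, a script skeleton could look like this (a sketch, assuming user@rhost and an ssh that supports -M, -S and -O):

#!/bin/sh
SOCKET=/tmp/ssh-master-socket.$$

# Open the "master" connection in the background, authenticating once.
ssh -M -S "$SOCKET" -f user@rhost "while true ; do sleep 3600 ; done"

# These re-use the authenticated "master" connection, no password needed.
scp -o ControlPath="$SOCKET" archive.tar.gz user@rhost:
ssh -S "$SOCKET" user@rhost "tar -xzf archive.tar.gz"

# Tell the "master" to exit when done.
ssh -S "$SOCKET" -O exit user@rhost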

However, there are still two problems to be solved. First, when the "master" quits, the dummy sleep loop continues to run. And second, how can the "master" be terminated if the given SSH version doesn’t offer the -O parameter (short of killing the process by its PID)?

An Encrypted File-backed File System on FreeBSD

The following is a compilation of information, largely based on the FreeBSD Handbook, Section 18.13 and Section 18.16. This post describes how a file-backed, encrypted file system can be built and used on FreeBSD.

Prerequisites

In order to follow the steps below, the following prerequisites must be met:

  • md(4) in the Kernel
  • gbde(4) in the Kernel, i.e. kldload geom_bde
  • The /etc/gbde directory must exist
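
The last two prerequisites can be satisfied like this (a sketch, assuming geom_bde is not compiled into the kernel; md(4) is usually present in GENERIC):

# kldload geom_bde
# mkdir -p /etc/gbde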

First time installation

After those requirements are fulfilled, it’s time for the first step: creating a file that will serve as the basis for the file system. There is no support for growing images, so you need to allocate all space now. This command creates a 128 MByte file filled with zeros:

$ dd if=/dev/zero of=encrypted.img bs=4k count=32768

Next, create a memory disk which is based on the image file created above. As root, do:

# mdconfig -a -t vnode -f encrypted.img -u <unit>

In the example above, the parameter -u <unit> is optional and specifies the unit number of the md(4) device. For example, if you use 4, then md4 will be created.

Now create a partition table, e.g. one with an automatic layout:

# bsdlabel -w md<unit> auto

At this point, you have the equivalent of a hard disk with one or more FreeBSD partitions on it. Note that there is no file system, yet. In order to create an encrypted file system, an initialization step must be performed:

# gbde init /dev/md0c -i -L /etc/gbde/encrypted.lock

The initialization step opens an editor where the user is asked to enter a few parameters. Most notably, it is probably sensible to change the sector_size to 4096, i.e. the page size on i386. When the editor is closed, the gbde(8) program asks for a password. This password will be used to encrypt the disk, so do not lose it. Note that the /dev/md0c parameter corresponds to the md(4) device which was previously created. The lock file can be named arbitrarily as long as it ends in .lock. Also note that the lock file must be backed up, as the file system cannot be easily accessed without it.

Now the encrypted device can be attached by running

# gbde attach /dev/md0c -l /etc/gbde/encrypted.lock

You’ll be prompted for the password set in the previous step. If the password is accepted, you’ll end up with a new disk device at /dev/md0c.bde on which you can operate the same way as on a regular disk. That means you’ll need to create a file system, first.

# newfs -U -O2 /dev/md0c.bde

Make sure you use the .bde device node and not the raw memory disk as you’d end up without encryption. When you’re done, it’s time to mount the file system:

# mkdir /encrypted
# mount /dev/md0c.bde /encrypted

Unmounting the encrypted file system

Unmounting the file system is easy, but the gbde(4) device needs to be detached before the md(4) device can be destroyed.

# umount /encrypted
# gbde detach /dev/md0c
# mdconfig -d -u 0

Re-mounting an encrypted file system

Re-mounting is simple, but note that the FreeBSD handbook suggests that the file system be checked for errors before mounting:

# mdconfig -a -t vnode -f encrypted.img
md0
# gbde attach /dev/md0c -l /etc/gbde/encrypted.lock
# fsck -p -t ffs /dev/md0c.bde
# mount /dev/md0c.bde /encrypted
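
For convenience, the attach and detach sequences could be wrapped in a pair of shell functions, e.g. like this (a sketch, assuming the image and lock file names used above and that the image attaches as md0):

#!/bin/sh

attach_encrypted() {
        mdconfig -a -t vnode -f encrypted.img -u 0 &&
        gbde attach /dev/md0c -l /etc/gbde/encrypted.lock &&
        fsck -p -t ffs /dev/md0c.bde &&
        mount /dev/md0c.bde /encrypted
}

detach_encrypted() {
        umount /encrypted
        gbde detach /dev/md0c
        mdconfig -d -u 0
}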

Cross compiling the FreeBSD Kernel

Wow, two posts in one day already 😉

There are two things I’d like to note. First, I noticed that cross-compiling seems to be a major issue for me. I don’t know why that is.

Second, I need to remember Warner Losh’s post on cross compiling FreeBSD. Essentially, the procedure is:

$ export TARGET=powerpc
$ export TARGET_ARCH=powerpc
$ make kernel-toolchain
$ make buildkernel

Addendum: Unfortunately, this procedure works only on architectures already supported by FreeBSD and its build system. Therefore, it doesn’t work for me. So here’s the full story on how I got FreeBSD to at least compile.

Building the Toolchain

Building the toolchain is pretty straightforward. I’ve already written about how to build a cross compiler. On FreeBSD, however, four things are different.

  • The target is powerpc64-unknown-freebsd. I don’t know if powerpc64-unknown-elf would have worked, too.
  • The target is new, so a patch to the binutils sources is required.
  • The GNU MP Bignum Library (GMP) is required. I used GMP 4.2.1 and installed it in $PREFIX.
  • Finally, the MPFR library needs to be built. I used MPFR 2.3.0 and installed it in $PREFIX, too (see the build sketch right after this list).
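
Both libraries build the usual autoconf way, roughly like this (a sketch, assuming $PREFIX is set; MPFR will probably need to be pointed at the GMP installation via --with-gmp=$PREFIX):

$ ./configure --prefix=$PREFIX
$ gmake
$ gmake install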

Note that those steps have to be performed before the compiler is built. Since I didn’t install the libraries in the standard locations, the LD_LIBRARY_PATH variable needs to be set before the compiler can be used:

$ export LD_LIBRARY_PATH=$PREFIX/lib

Building the Kernel

The basic procedure of building a kernel is outlined in the FreeBSD Developer’s Handbook. Provided that the cross compiler has been installed in $PREFIX, these steps are required:

$ export MACHINE=powerpc64
$ export MACHINE_ARCH=powerpc64
$ export CC=${PREFIX}/${TARGET_PREFIX}-gcc
$ export NM=${PREFIX}/${TARGET_PREFIX}-nm
$ export LD=${PREFIX}/${TARGET_PREFIX}-ld
$ export SYSTEM_LD=${LD}
$ export OBJCOPY=${PREFIX}/${TARGET_PREFIX}-objcopy
$ cd /usr/src/sys/powerpc64/conf
$ config KERNCONF
$ cd ../compile/KERNCONF
$ make cleandepend && make depend
$ make kernel

Oh, of course this doesn’t work with the stock FreeBSD sources. Instead, my FreeBSD 64-Bit PowerPC Mercurial repository needs to be used. Note that for convenience reasons, that tree includes a crossenv script which, when sourced, sets up the required environment variables.

Qemu, FreeBSD and coreboot

Since my attempts at getting Qemu running on Mac OS X were unsuccessful, I’ve decided to go a different route. I’m now trying to build it on FreeBSD again.

Some time ago, I took some notes on how to get Qemu running on FreeBSD and added them to the coreboot wiki. Some time later, I tried to build Qemu per those instructions, only to discover that the port had been updated to a newer version of Qemu and the instructions no longer worked.

So I’ve decided to maintain my own copy of the Qemu sources. The goal is to have a working version of Qemu which can be built on FreeBSD and can run coreboot. The repository is at svn+ssh://phs.subgra.de/usr/svnroot/qemu, a web frontend is available at https://phs.subgra.de/svnweb/index.cgi/qemu. Since the repository is not (yet?) public, here is a tar-ball of the latest version.

Building Qemu for FreeBSD from “my” sources is pretty straightforward. However, it’s not as straightforward as building on Linux or from a FreeBSD port, so here are the full instructions 😉

$ export BSD_MAKE=`which make`
$ ./configure (your options here)
$ gmake
$ gmake install
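
Once installed, the resulting Qemu should be able to boot a coreboot image by placing it in a directory as bios.bin and pointing Qemu at that directory (a sketch; the flags depend on the Qemu version, and disk.img is made up):

$ cp coreboot.rom /path/to/biosdir/bios.bin
$ qemu -L /path/to/biosdir -hda disk.img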

Have fun.

TianoCore on FreeBSD/amd64, Take 2

Now that I have built a cross compiler, I’m trying to build the TianoCore EDK again.

In an earlier post, I listed a few environment variables that need to be set in order to build the EDK. Because the build process needs to use the cross compiler, another environment variable is needed:

$ export CC=/opt/i386-tiano-pe/bin/gcc

Since I’m lazy and all, I put everything in a little script called env.sh:

export WORKSPACE=/home/phs/edk2
export JAVA_HOME=/usr/local/diablo-jdk1.5.0
export ANT_HOME=/opt/apache-ant-1.6.5
export XMLBEANS_HOME=/opt/xmlbeans-2.1.0
export PATH=$PATH:$ANT_HOME/bin:$XMLBEANS_HOME/bin
export CC=/opt/i386-tiano-pe/bin/gcc

Also, the EDK build notes mention that the default build target is the Windows NT Emulation environment. However, that target cannot be built using GCC, so I needed to edit the file Tools/Conf/target.txt and change the ACTIVE_PLATFORM to:

ACTIVE_PLATFORM       = EdkUnixPkg/Unix.fpd

Now, I can kick off the build process as follows:

$ cd ~/edk2
$ . env.sh
$ . edksetup.sh newbuild

Note that the build script must be “sourced”, not executed. Unfortunately, the build process still fails, but now with a different error:

       [cc] 1 total files to be compiled.
       [cc] In file included from /home/phs/edk2/Tools/CCode/Source/CompressDll/CompressDll.h:3,
       [cc]                  from /home/phs/edk2/Tools/CCode/Source/CompressDll/CompressDll.c:17:
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:27:20: error: jni_md.h: No such file or directory
       [cc] In file included from /home/phs/edk2/Tools/CCode/Source/CompressDll/CompressDll.h:3,
       [cc]                  from /home/phs/edk2/Tools/CCode/Source/CompressDll/CompressDll.c:17:
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:45: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jsize'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:104: error: expected specifier-qualifier-list before 'jbyte'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:193: error: expected specifier-qualifier-list before 'jint'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:1834: error: expected specifier-qualifier-list before 'jint'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:1842: error: expected specifier-qualifier-list before 'jint'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:1851: error: expected specifier-qualifier-list before 'jint'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:1888: error: expected specifier-qualifier-list before 'jint'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:1927: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jint'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:1930: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jint'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:1933: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jint'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:1937: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jint'
       [cc] /usr/local/diablo-jdk1.5.0/include/jni.h:1940: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
       [cc] In file included from /home/phs/edk2/Tools/CCode/Source/CompressDll/CompressDll.c:17:
       [cc] /home/phs/edk2/Tools/CCode/Source/CompressDll/CompressDll.h:16: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jbyteArray'
       [cc] /home/phs/edk2/Tools/CCode/Source/CompressDll/CompressDll.c:29: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jbyteArray'

BUILD FAILED
/home/phs/edk2/Tools/build.xml:22: The following error occurred while executing this line:
/home/phs/edk2/Tools/CCode/Source/build.xml:254: The following error occurred while executing this line:
/home/phs/edk2/Tools/CCode/Source/CompressDll/build.xml:45: gcc failed with return code 1

The root cause of this is that the compiler can’t find the jni_md.h header which is located in $JAVA_HOME/include/freebsd (at least on my system). I worked around this problem by editing $JAVA_HOME/include/jni.h as follows:

--- jni.h.orig  2008-02-06 11:50:05.000000000 +0100
+++ jni.h       2008-02-06 11:50:16.000000000 +0100
@@ -24,7 +24,7 @@
 /* jni_md.h contains the machine-dependent typedefs for jbyte, jint
    and jlong */
 
-#include "jni_md.h"
+#include "freebsd/jni_md.h"
 
 #ifdef __cplusplus
 extern "C" {

Now I’m stuck with yet another compilation error:

init:
     [echo] Building the EDK Tool Library: CompressDll

Lib:
       [cc] 1 total files to be compiled.
       [cc] /home/phs/edk2/Tools/CCode/Source/CompressDll/CompressDll.c: In function 'Java_org_tianocore_framework_tasks_Compress_CallCompress':
       [cc] /home/phs/edk2/Tools/CCode/Source/CompressDll/CompressDll.c:57: warning: overflow in implicit constant conversion
       [cc] Starting link
       [cc] /usr/bin/ld: /home/phs/edk2/Tools/CCode/Source/Library/libCommonTools.a(EfiCompress.o): relocation R_X86_64_32S can not be used when making a shared object; recompile with -fPIC
       [cc] /home/phs/edk2/Tools/CCode/Source/Library/libCommonTools.a: could not read symbols: Bad value

BUILD FAILED
/home/phs/edk2/Tools/build.xml:22: The following error occurred while executing this line:
/home/phs/edk2/Tools/CCode/Source/build.xml:254: The following error occurred while executing this line:
/home/phs/edk2/Tools/CCode/Source/CompressDll/build.xml:45: gcc failed with return code 1

Well, I guess we’ll see where this ends…

Building a Cross Compiler on FreeBSD

I’m currently trying to build a cross compiler (and other required tools) on FreeBSD. The compiler will run on FreeBSD/amd64 and should produce i386 binaries. This wouldn’t be too hard since that task can easily be accomplished by using the FreeBSD source tree. However, I need the toolchain to produce binaries in the PE/COFF format instead of the default ELF format.

Building the toolchain is somewhat tricky; at least, I found it to be poorly documented. But maybe I just looked in the wrong places. Building the toolchain requires:

  • Building binutils.
  • Locating some header files.
  • Building the compiler.
  • Building a C Library.
  • Building the compiler again so it uses the C library that was just built.

For my needs, I found that the last two steps weren’t needed. I wrote a script that downloads the sources, extracts the archives and builds the toolchain. Here’s the script in full (I really wish there was a way to upload files to this thing):

#!/bin/sh

#
# Copyright (c) 2008 Philip Schulz 
#

#
# This script downloads, builds and installs GCC and GNU Binutils that can
# produce x86 binaries in PE/COFF Format. The cross toolchain needs some headers
# that aren't usually present on the host system. However, those headers can be
# obtained from the cygwin sources, that's why a snapshot of the cygwin sources
# is downloaded.
#
# After the script finishes, the tools will be located at
# ${PREFIX}/${TARGET_ARCH}/bin. Some other binaries will be installed in
# ${PREFIX}/bin with their own prefix of ${TARGET_ARCH} but I don't know what
# they are for.
#

# Prefix where the Cross-Tools will live
PREFIX="${PREFIX:-/opt}"

# Target architecture.
TARGET_CPU="${TARGET_CPU:-i386}"
TARGET_ARCH=${TARGET_CPU}-tiano-pe

# Program that can fetch the files.
FETCH_COMMAND="/usr/bin/ftp -V"

# GNU Make
GNU_MAKE=`which gmake`

################################################################################
#
# GCC settings.
#
################################################################################
# What version of GCC will be fetched, built and installed 
GCC_VERSION=gcc-4.2.3
# What mirror to use.
GCC_MIRROR=ftp://ftp-stud.fht-esslingen.de/pub/Mirrors/ftp.gnu.org
# File name of the GCC sources. Should probably not be changed.
GCC_ARCHIVE=$GCC_VERSION.tar.bz2
# Where the GCC Sources can be fetched from. Should probably not be changed.
GCC_URL=$GCC_MIRROR/gcc/$GCC_VERSION/$GCC_ARCHIVE
# Arguments for the GCC configure script. Should probably not be changed.
GCC_CONFIGURE_ARGS="--prefix=${PREFIX} --target=${TARGET_ARCH} "
GCC_CONFIGURE_ARGS+="--with-gnu-as --with-gnu-ld --with-newlib "
GCC_CONFIGURE_ARGS+="--disable-libssp --disable-nls --enable-languages=c "
GCC_CONFIGURE_ARGS+="--program-prefix=${TARGET_ARCH}- "
GCC_CONFIGURE_ARGS+="--program-suffix=-4.2.3 "


################################################################################
#
# Binutils settings.
#
################################################################################
# What version of the GNU binutils will be fetched, build and installed
BINUTILS_VERSION=binutils-2.18
# What mirror to use.
BINUTILS_MIRROR=ftp://ftp-stud.fht-esslingen.de/pub/Mirrors/ftp.gnu.org
# File name of the binutils sources. Should probably not be changed.
BINUTILS_ARCHIVE=$BINUTILS_VERSION.tar.gz
# Where the GCC Sources can be fetched from. Should probably not be changed.
BINUTILS_URL=$BINUTILS_MIRROR/binutils/$BINUTILS_ARCHIVE
# Arguments for the GCC configure script. Should probably not be changed.
BINUTILS_CONFIGURE_ARGS="--prefix=${PREFIX} --target=${TARGET_ARCH} "
BINUTILS_CONFIGURE_ARGS+="--disable-nls "

################################################################################
#
# Cygwin settings.
#
################################################################################
CYGWIN_SNAPSHOT=20080129
CYGWIN_ARCHIVE=cygwin-src-${CYGWIN_SNAPSHOT}.tar.bz2
CYGWIN_MIRROR=http://cygwin.com/
CYGWIN_URL=${CYGWIN_MIRROR}snapshots/${CYGWIN_ARCHIVE}
CYGWIN_DIR=cygwin-snapshot-${CYGWIN_SNAPSHOT}-1

################################################################################
#
# Batch code.
#
################################################################################
#
# Fetches the files.
#
do_fetch() {
        if [ ! -f "$GCC_ARCHIVE" ] ; then
                echo "Fetching ${GCC_URL}"
                ${FETCH_COMMAND} ${GCC_URL}
        else
                echo $GCC_ARCHIVE already locally present.
        fi

        if [ ! -f "$CYGWIN_ARCHIVE" ] ; then
                echo "Fetching ${CYGWIN_URL}"
                ${FETCH_COMMAND} ${CYGWIN_URL}
        else
                echo $CYGWIN_ARCHIVE already locally present.
        fi

        if [ ! -f "$BINUTILS_ARCHIVE" ] ; then
                echo "Fetching ${BINUTILS_URL}"
                ${FETCH_COMMAND} ${BINUTILS_URL}
        else
                echo $BINUTILS_ARCHIVE already locally present.
        fi
}

#
# Extracts the archives.
#
do_extract() {
        # Remove already extracted files first.
        rm -rf ${GCC_VERSION}
        rm -rf ${CYGWIN_DIR}
        rm -rf ${BINUTILS_VERSION}

        # Extract the archives
        if [ -f $GCC_ARCHIVE ] ; then
                echo "Extracting ${GCC_ARCHIVE}"
                tar -jxf ${GCC_ARCHIVE}
        fi

        if [ -f $CYGWIN_ARCHIVE ] ; then
                echo "Extracting ${CYGWIN_ARCHIVE}"
                tar -jxf ${CYGWIN_ARCHIVE}
        fi

        if [ -f $BINUTILS_ARCHIVE ] ; then
                echo "Extracting ${BINUTILS_ARCHIVE}"
                tar -xzf ${BINUTILS_ARCHIVE}
        fi
}


BUILD_DIR_PREFIX=build-

#
# Builds Binutils.
#
do_binutils_build() {
        BUILD_DIR_BINUTILS=${BUILD_DIR_PREFIX}binutils-${TARGET_ARCH}

        # Remove dir if it exists.
        if [ -d $BUILD_DIR_BINUTILS ] ; then
                rm -rf $BUILD_DIR_BINUTILS
        fi

        echo "Building binutils..."

        # Changing directory, so use a sub-shell (?)
        (
                # Create the build directory.
                mkdir ${BUILD_DIR_BINUTILS} && cd ${BUILD_DIR_BINUTILS};
                # Configure, build and install binutils
                ../${BINUTILS_VERSION}/configure ${BINUTILS_CONFIGURE_ARGS} &&
                ${GNU_MAKE} -j 12 -w all && ${GNU_MAKE} -w install
        )

        # Remove build dir
        rm -rf $BUILD_DIR_BINUTILS

        echo "Binutils Build done."
}

#
# "Builds" cygwin. Actually, it only copies some headers around.
#
do_cygwin_build() {
        HEADERS=${PREFIX}/${TARGET_ARCH}/sys-include

        mkdir -p $HEADERS  &&
        cp -rf ${CYGWIN_DIR}/newlib/libc/include/* $HEADERS &&
        cp -rf ${CYGWIN_DIR}/winsup/cygwin/include/* $HEADERS
}

#
# Builds GCC
#
do_gcc_build() {
        BUILD_DIR_GCC=${BUILD_DIR_PREFIX}gcc-${TARGET_ARCH}

        # Remove dir if it exists.
        if [ -d $BUILD_DIR_GCC ] ; then
                rm -rf $BUILD_DIR_GCC
        fi

        echo "Building GCC..."

        # Changing directory, so use a sub-shell (?)
        (
                # Create the build directory.
                mkdir ${BUILD_DIR_GCC} && cd ${BUILD_DIR_GCC};
                # Configure, build and install GCC.
                ../${GCC_VERSION}/configure $GCC_CONFIGURE_ARGS &&
                ${GNU_MAKE} -j 12 -w all && ${GNU_MAKE} -w install
        )
        # Remove build dir
        rm -rf $BUILD_DIR_GCC

        echo "GCC Build done."
}

do_fetch
do_extract
do_binutils_build
do_cygwin_build
do_gcc_build

Unfortunately, the gcc binary built by the script, located in /opt/i386-tiano-pe/bin, can’t produce executables. Invoking the compiler on a source file (a “Hello, World!” program) dies with:

$ /opt/i386-tiano-pe/bin/gcc main.c -o main
/opt/lib/gcc/i386-tiano-pe/4.2.3/../../../../i386-tiano-pe/bin/ld: crt0.o: No such file: No such file or directory
collect2: ld returned 1 exit status

I assume this is because I skipped the last two steps in the list at the beginning of this post. However, using the compiler to generate an assembler file (parameter -S) and then running the assembler on that file does indeed produce a PE/COFF object file.

$ cd /opt/i386-tiano-pe/bin
$ ./gcc -S ~/main.c
$ ./as -o main.o main.s
$ file ./main.o
./main.o: MS Windows COFF Intel 80386 object file