The infamous memory hole

Ok, so I’ve always suspected it, i.e. had a theory, but the CPC945 manual (section 7.2) confirms it.

If a machine has 4 GBytes of memory installed and, say, 1 GByte of I/O memory is mapped from 0x80000000 (2 GBytes) upwards, then the physical memory will still be fully accessible. It will respond to read requests in the region 0x0 thru 0x7FFFFFFF (i.e. 0 thru 2 GBytes – 1) and in the region 0xC0000000 thru 0x13FFFFFFF (i.e. 3 GBytes thru 5 GBytes – 1).

This will of course only work if the CPU can make requests in that range, i.e. has a large enough address bus. Hence there is an actual hole that shadows physical memory when installing 4 GBytes in an x86-based system.
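
To illustrate, the address map derived from the numbers above would look like this (the memory controller remaps the upper 2 GBytes of RAM above the hole):

0x00000000 - 0x7FFFFFFF    RAM (first 2 GBytes)
0x80000000 - 0xBFFFFFFF    I/O (1 GByte)
0xC0000000 - 0x13FFFFFFF   RAM (remaining 2 GBytes, remapped)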

Dual Monitor on T60 (Internal + DVI)

I think I’ve found a way to make my T60p use the internal display and also drive an external monitor via the DVI port (on Linux). For some reason, this does not work automatically upon reboot, but has to be done from the command line.

aticonfig --dtop=horizontal --screen-layout=left --enable-monitor=lvds,tmds1

Now restart the X server and you should see video output on both monitors. I have the external monitor left of the laptop, so I need to run this command as well:

aticonfig --swap-monitor

Then, both monitors work. Unfortunately, I seem to have broken suspend/resume somewhere along the way. It seems that a combination of the things listed below makes suspend/resume work again. I don’t know if both are required or if either helps.

  • Update the fglrx driver. I’m using the kernel-module-ATI-upstream-fglrx (carries version numbers 8.452.1 and 2.6.18_53.1) as well as the ATI-upstream-fglrx package (version number 8.452.1-3.oc2) from the repository the IBM Open Client uses by default.
  • Disable the AIGLX extension
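
    For reference, disabling AIGLX amounts to a snippet like this in xorg.conf (this is the standard X.org server flag, shown here the way I would set it):

    Section "ServerFlags"
        Option "AIGLX" "off"
    EndSection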

On some revision control systems

So I’ve long wondered about the advantages of those shiny, modern (distributed) revision control systems that have seemingly become quite fashionable these days. I started with CVS and liked it, but once I moved to Subversion, I started to feel a dissatisfaction with my version control system – the kind that would not go away if I went back to CVS. It’s like driving a car: Once you drive a faster car, you realize the faster car is not fast enough. Obviously, going back to the original car is a step in the wrong direction.

CVS

The first revision control system I used was CVS. I liked it when I got used to it. It let me keep track of my files, was easy to back up and was fast enough for my little projects. There were good workarounds for CVS’ limitations such as “renaming” files by performing an explicit add and then remove operation or by doing a “repo-copy” (i.e. copying the files in the repository itself). Empty directories could be pruned on updates. What else would one want?
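
The rename workaround, for example, boils down to a sequence like this (file names made up; note that the renamed file starts with a fresh history):

$ mv old.c new.c
$ cvs remove old.c
$ cvs add new.c
$ cvs commit -m "Rename old.c to new.c"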

Subversion

Well, I have long wondered why I should be using Subversion instead of CVS. After all, CVS has worked well for me in the past and simply "because it can rename files" hasn’t been too convincing. In fact, I have heard that argument so often and from so many people, it started to become a reason not to switch to Subversion. Well, then I gave Subversion a shot and I have to say, I like it – with some limitations.

But let me first say what I think is good about Subversion. I like Subversion’s speed advantage over CVS. The reason I started using Subversion was that I wanted a way to track my own changes to a rather large source tree that is kept in CVS. I wanted to make periodic imports of snapshots and merge my local changes – similar to how vendor branches work in CVS. When trying to accomplish this with CVS, it became apparent that it would be very time consuming: An import and merge session could take several hours. Doing the same thing with Subversion accelerated the process quite a bit – an update session would still take about an hour, but mostly because I had to download the updated source tree from the net.

Ok, that’s about it when it comes to Subversion’s advantages. What I don’t like is Subversion’s poor handling of branches. I don’t think a branch is another directory, I think a branch is a branch. The same holds true for a tag. Also, merging branches is a major pain – while simple at first, it gets to the point where keeping track of what has been merged and what needs merging is a complex task. Granted, CVS isn’t a whole lot better at that.
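
To illustrate the point: a branch (or a tag – there is no technical difference between the two) is created by copying a directory. With a made-up repository URL, it looks like this:

$ svn copy http://example.com/repo/trunk \
           http://example.com/repo/branches/my-branch \
           -m "Create my-branch"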

To set things straight: I’m not saying Subversion is bad. All I’m saying is that it isn’t a lot better than CVS for my purposes.

Mercurial

So now on to the reason I started this post. I realize it has become a lot longer than anticipated, but who cares?! I read about the advantages that distributed revision control systems offer some time ago. Having all the history available with every working copy is one of them. The ability to keep track of merges between branches is another. It’s the latter that got me interested in Mercurial. While I realize that upcoming versions of Subversion will support merge tracking as well, the version(s) installed on my various computers don’t support it – and I don’t want to compile some development branch of a critical (to me) piece of software.

So I looked at other choices, e.g. git and Mercurial. To be honest, I haven’t looked at git because I heard it is supposed to be hard to use with more than 100 commands. So I started to look at Mercurial and played around with it. I like it, so I don’t think I’ll look at git anytime soon.

Mercurial has (almost) everything I need: It’s fast, it’s easy to use and it handles merges between branches well. I’m sure the "almost" can be erased as soon as I dig further. What’s still missing is an easy setup for a permanent, central "master" repository (short of the "hg serve" command, which starts a minimal HTTP server). I’m also missing a web frontend similar to CVSWeb and SVNWeb – I’m sure such a thing exists, but I haven’t found it yet. The third thing I haven’t figured out yet is how to do backups.
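
For the record, the minimal HTTP server mentioned above is started like this (the port number is arbitrary):

$ hg serve -p 8000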

I’d like to write a bit about the differences between Mercurial and the other systems I’ve used. First, a revision of a file has up to two parents. Actually, it always has two parents, but one may be the null parent, and that doesn’t count. You start out with a file that has two null parents. Once you commit a change to the file, the file’s parent becomes the previous revision, and so on. If a revision has only one parent, and no other revision uses it as its parent, then that revision is called a head.

The second parent comes into play when you have multiple branches. Creating a branch is also different from other systems. You first switch a working copy to a new branch by issuing hg branch <name>. The default branch, i.e. what’s called "HEAD" in CVS and "trunk" in Subversion, is called "default" in Mercurial. The default branch always exists. Switching a working copy to a new branch does not create the branch yet. Only committing a first change in that working copy does. Note that the first revision’s parent will be the branchpoint revision.
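
A minimal session, with a made-up branch name, looks like this:

$ hg branch my-feature            # switches the working copy only
$ hg commit -m "First change"     # this commit actually creates the branch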

So what happens when you merge between branches? When you merge and commit, the current branch will contain all changes of the original branch since the last merge (or the branchpoint). You don’t need to remember which changes you need to merge into the current branch – Mercurial does that automatically for you. This is possible because when merging, a file’s second parent is filled in with the head revision of the originating branch. That also means that when you merge changes from branch A into branch B, the head revision of branch A is no longer a head. Don’t worry, though: once you commit something on branch A again, it will have a head again.
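
Using the branches A and B from above, a merge session might look like this:

$ hg update B                     # make B the current branch
$ hg merge A                      # A’s head becomes the second parent
$ hg commit -m "Merge A into B"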

Now on to the distributed part. The concept of branches is taken further in a distributed revision control system. Multiple instances of the repository can have dependencies between each other. A new copy of a repository is created by running the "hg clone" command. Then, changes to that repository copy can be made as if the original never existed. But, if you want to, you can also pull any changes the original repository has incorporated. Do this by running "hg pull" – it fetches all changes from the master repository into your copy (merging them into your working copy is a separate step). It also works the other way around: You can push your changes upstream by running "hg push" (if the upstream repository allows it).
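
Put together, and with a made-up repository URL, a typical session could look like this:

$ hg clone http://example.com/repo my-copy
$ cd my-copy
  ... hack, commit ...
$ hg pull                         # fetch new changesets from the original
$ hg update                       # bring the working copy up to date
$ hg push                         # publish local changesets (if permitted)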

All in all, I find Mercurial very easy to use once the basic concepts have been understood. I’m not sure yet whether I’ll convert any of my Subversion repositories to Mercurial or if I’ll use it seriously at all. But for future reference, here’s a link to a free book on Mercurial.

Why Open Firmware is pretty neat

I’ve just been impressed by the power of Open Firmware again. I’m currently tinkering with the decrementer and time base registers on a PowerPC processor and I need to find out if some of my assumptions are correct.

One way to do that is to compile my C code, load and start it over the network and see if things work the way I think they should work. While this works, it’s somewhat time consuming.

Another way of doing this is to use the Open Firmware user interface – a full-featured Forth system. As such, it offers very powerful features during development. In fact, everything entered at the client interface could also be compiled into a Forth word, which could even be included in the firmware’s dictionary.
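
For example, a one-line definition entered at the prompt immediately creates a new word that behaves just like a built-in one:

0> : double ( n -- 2n ) 2 * ;
OK
0> 21 double .
42 OK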

So let’s take a look at the base conversion features Forth offers.

0> decimal
OK
0> 63 19 - dup .
44 OK
1> hex .
2c OK

The code above switches the default number base to decimal. Then the (decimal) numbers 63 and 19 are placed on the stack and a subtraction (63 – 19) is performed. What ends up on the stack is the result of the math operation. We duplicate the item (basically saving a copy for later use) and then pop and display the top value. The result is 44, i.e. the result of the subtraction when calculating with decimal numbers.

Now we’re switching the number base to hexadecimal again, and display the stack’s topmost value (we saved the calculation result before). The result is 2c, i.e. 44 displayed as a hexadecimal number.

Next up, logical operations. A left shift is defined as

lshift (value count -- value)

meaning you place value on the stack, then place the number of bits you want it shifted by (count) on the stack; when the lshift word returns, the shifted value will be on the stack. So take a look at this:

0> decimal 63 19 - hex
OK
1> 1 swap lshift
OK
1> dup .
100000000000  OK

The first line is the subtraction explained above. Then, we push a 1 on the stack and swap the two topmost items. The stack now looks like ( 1 2c ), which we feed to the lshift operator. We duplicate the result and display one copy. And there’s the bitmask, with the 44th bit set.

Going further to the more firmware-specific parts: The Open Firmware implementation I’m using right now offers a word that lets me read the boot processor’s HID0 register. The word is hid0@; it takes no input and places the register’s value on the stack. Similarly, there’s a word that lets me write the register: hid0!. It takes one argument from the stack and doesn’t place anything on the stack.
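
In the stack-comment notation used for lshift above, the two words look like this:

hid0@ ( -- value )
hid0! ( value -- )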

So take the next sequence. I’m assuming it’s executed right after the previously quoted sequence, so the calculated bitmask should be on the stack.

1> hid0@
OK
2> dup .
100080000000 OK
2> or dup .
100080000000 OK
1> hid0!
OK
0>

First, we read the HID0’s value and display it in a non-destructive manner. Then we or the bitmask and the register value and display the result. Note that the result is the same, meaning the 44th bit was already set. Then, we write the value back to the register.

This is just an example of the power of Open Firmware. I’m going to play some other tricks right now, but I wanted this to be written down first so I can look it up again.

The TianoCore Contributor’s Agreement

So, I finally found some time to crawl through the TianoCore project’s Contributor’s Agreement. Here’s what I think it means.

  • Preamble: So Intel has decided to release some code under what it calls the "BSD license". Personally, I think the BSD license is something else, or maybe even something like this. I don’t think a link to an incomplete license stub is enough, though. But enough of the ranting.

    Just to be clear here, I think it is safe to assume that Intel released their code under the following license (note that it’s just the stub they provide a link to, filled in with some meaningful values):

    Copyright (c) 2006-2008, Intel Corp.
    All rights reserved.
    
    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions are
    met:
    
    * Redistributions of source code must retain the above copyright notice, this
      list of conditions and the following disclaimer.
    
    * Redistributions in binary form must reproduce the above copyright notice,
      this list of conditions and the following disclaimer in the documentation
      and/or other materials provided with the distribution.
    
    * Neither the name of Intel Corp. nor the names of its contributors may be
      used to endorse or promote products derived from this software without
      specific prior written permission.
    
    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
    AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
    IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
    ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
    LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
    CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
    SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
    INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
    CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
    POSSIBILITY OF SUCH DAMAGE.
    

    In addition to their own code which they release under the "BSD license", there is some code in the TianoCore tree that is released under other licenses. Specifically the FAT32 code, which is apparently covered by some patents. If other licenses apply, and that’s the key point here, the license is either in the source files themselves or packaged with the source files.

  • Definitions: I’m a "contributor" if I hold the copyright on some code that I contribute to the TianoCore project. If I represent a company, all my employees are considered part of my company and not separate contributors. A "contribution" is anything I send to the TianoCore project, e.g. via mail, snail mail, telephone, etc., as long as I don’t explicitly mark it as "not a contribution".
  • License for Contributions: So when I decide to contribute something to the TianoCore project, I automatically agree to provide it under a license. That license can be found in the contributor’s agreement. The bullet points a) and b) are pretty clear: The permission to use and redistribute my contributions, provided that the three conditions laid out in the "BSD license" quoted above are met.

    The next bullet point, c), is somewhat harder to understand. I interpret it as: If I hold a patent on something, and my contribution infringes that patent, I automatically grant a patent license. I grant it to everybody who wants to exercise the rights granted by the copyright license mentioned above. However, here’s the catch: That patent license applies only to my unmodified contribution.

    I’m not sure what to think about that. I think Intel is trying to protect their own patents. So if they release some code to the TianoCore project which is covered by a patent they own, they only grant a minimum patent license. What remains unclear is whether the patent license is still valid even if I modify their code as permitted by the copyright license they granted.

    The last bullet point, d), is an easy one again. It’s simply the "provided ‘as is’" part in the copyright license cited above.

  • Representations: By signing the agreement, I promise that I created the code myself. If my employer does have any rights, I promise that it explicitly permitted me to contribute my code.
  • Third Party Contributions: If I choose to contribute third-party code, I need to explicitly mark it as such. It also must be separate from my own contributions.
  • Miscellaneous: The agreement is in English, and translations are not authoritative. If I want to take the whole thing to court, I need to do it in Delaware (US).

So what’s the conclusion here? I think Intel is pretty open about releasing their code. However, they are not so open about creating an open source project around their code. What I mean is that there are quite a few legal hurdles one has to pass when contributing code to the TianoCore project. In effect, they force the BSD license on any code I contribute, and I think that’s OK. On the other hand, however, they prevent me from forking the project by introducing that stupid patent clause, since I have no easy way of checking whether a specific piece of code infringes one of their patents.

I really wonder if they only want to protect themselves from getting sued over code contributed to the project by non-Intel employees. Or are they really trying to create the impression of an Open Source project when it’s really not?

What to do when Parallels brings the System to a halt

I don’t know when this started, but recently, whenever I try to start Parallels (which admittedly doesn’t happen very often), my whole system grinds to a halt. Well, not completely – the system is still running, but it won’t even let me switch between applications in a responsive manner. Even the mouse movement isn’t smooth anymore.

Note that this is with Parallels 2 on Mac OS X 10.5. The system is a first generation Mac Pro w/ two 2.66 GHz Core 2 CPUs and 3 GBytes of RAM. So the system could use a little more RAM, but apart from that it shouldn’t have any issues running Parallels. And in fact, things used to work just fine.

Anyways, here’s a workaround I’ve discovered today:

$ sudo /Library/StartupItems/Parallels/Parallels stop
$ sudo /Library/StartupItems/Parallels/Parallels start

Porting TianoCore to a new platform, part 2

So it took me exactly one day to start the TianoCore DXE Core on a new platform. Of course, this doesn’t count the roughly 10 weeks it took me to understand how the TianoCore Codebase works 😉 Also, it took me a fair amount of work to fix one thing or the other.

Anyways, I wanted to note that the generic memory test module included in the TianoCore codebase is nowhere near "quick" when you throw it at an 8 GByte memory range, despite the fact that it has a mode called QuickMode.

Reminder about porting TianoCore to a new Platform

Just a quick note to remind myself that when porting the TianoCore stack to a new platform, the PEI needs an implementation of the PEI_LOAD_FILE_PPI before it can load any PEIMs from the FV.

For the platforms I have worked with so far, the PPI was implemented in the SEC.

The beauty of security extensions

I just spent a good day debugging a problem that eventually turned out to be (most likely) caused by some Linux security extensions deployed on the machine I test my code on.

The code loads an ELF image at runtime and then transfers control to it. Previously, I had worked with 32-Bit PowerPC executables that I ran on a 64-Bit PowerPC host. I recently changed this so that my code (as well as the ELF images it loads) would be 64-Bit PowerPC executables.

In order to obtain memory into which the ELF image could be loaded, I previously used the malloc(3) call. I didn’t want to use mmap(2) since I was going to port the code to an environment where mmap(2) would not be available. That worked fine in the 32-Bit case.

Anyways, it turns out that, in the 64-Bit case, trying to execute code in a malloc(3)-ed buffer instantly results in a segmentation fault. Using a mmap(2)-ed buffer (with the PROT_EXEC flag) fixes the issue.
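
Here’s a minimal sketch of the working approach, assuming Linux and a POSIX environment (the function name is made up):

#include <stddef.h>
#include <sys/mman.h>

/* Allocate a buffer that code may be executed from. A malloc(3)-ed
 * buffer may live in a region the kernel maps without execute
 * permission, so we request PROT_EXEC explicitly. */
void *alloc_exec_buffer(size_t size)
{
    void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return buf == MAP_FAILED ? NULL : buf;
}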

I would still like to know why there is a difference between the 32-Bit and the 64-Bit case.

Building e2fsprogs shared libraries

I found myself needing to build the e2fsprogs package (again) and found out that it doesn’t build shared libraries by default. However, I need shared libraries, so this is what it takes to build the package:

$ ./configure --prefix=/opt --enable-elf-shlibs
$ make
$ make install
$ make install-libraries