I just spent a good day debugging a problem that eventually turned out to be (most likely) caused by some Linux security extensions deployed on the machine I test my code on.
The code loads an ELF image at runtime and then transfers control to it. Previously, I worked with 32-bit PowerPC executables that I ran on a 64-bit PowerPC host. I recently changed this so that my code (as well as the ELF images it loads) would be 64-bit PowerPC executables.
To obtain memory to load the ELF image into, I previously used malloc(3). I didn’t want to use mmap(2), since I was going to port the code to an environment where mmap(2) would not be available. That worked fine in the 32-bit case.
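For concreteness, the loading scheme boiled down to something like the following sketch. The names image, image_size and entry_offset are stand-ins for details of my loader; the real code walks the ELF program headers rather than copying the whole file:

```c
#include <stdlib.h>
#include <string.h>

/* Sketch of the malloc(3)-based approach: copy the image into a heap
 * buffer and jump to its entry point. */
static void run_from_heap(const void *image, size_t image_size,
                          size_t entry_offset)
{
    void *buf = malloc(image_size);
    if (buf == NULL)
        return;
    memcpy(buf, image, image_size);

    void (*entry)(void) = (void (*)(void))((char *)buf + entry_offset);
    entry(); /* fine on the 32-bit setup, instant SIGSEGV on the 64-bit one */

    free(buf);
}
```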
Anyway, it turns out that, in the 64-bit case, trying to execute code in a malloc(3)-ed buffer instantly results in a segmentation fault. Using an mmap(2)-ed buffer (mapped with the PROT_EXEC flag) fixes the issue.
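The working variant replaces the heap buffer with an anonymous mapping that explicitly requests execute permission. Again a sketch, with the same stand-in names as above:

```c
#include <string.h>
#include <sys/mman.h>

/* Sketch of the fix: obtain the buffer from mmap(2) with PROT_EXEC set,
 * so the kernel maps the pages as executable. */
static void run_from_mmap(const void *image, size_t image_size,
                          size_t entry_offset)
{
    void *buf = mmap(NULL, image_size,
                     PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return;
    memcpy(buf, image, image_size);

    void (*entry)(void) = (void (*)(void))((char *)buf + entry_offset);
    entry(); /* executes, because the mapping carries PROT_EXEC */

    munmap(buf, image_size);
}
```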
I would still like to know why there is a difference between the 32-bit and the 64-bit cases.