Heard about the new C bandwagon? According to April's Tiobe report C has eclipsed Java as the number 1 most popular language by a whopping 0.007%. So while all the rest of you Java suckers have just lost all professional relevance, I'm OK. I cut my professional teeth on C.
Some time ago on Twitter I gave in-jest advice to new C programmers: all your code and data are in one giant mutable array indexed by pointers, good luck. David MacIver (http://twitter.com/drmaciver) responded that the C standard didn't say that. Whereupon I swore a mighty blood oath of vengeance upon him for ruining my joke with facts.
Vengeance is hard so I'll get started later, maybe tomorrow or next week. Right now I'll write a bit about the facts behind pointers. One of the trickiest aspects of writing portable C programs is that a great deal of the specification says that the behavior of certain constructs is undefined. Old C hands sometimes point out some undefined behavior in a C newbie's program whereupon the newbie fires up gcc or msvc or whatever and shows that the code behaves exactly as it "should." Who's right? What does it mean to be undefined?
C was designed to 1) be portable across a wide range of platforms and 2) work at a very low level. Those two goals are pretty much in opposition - low level details can vary widely across platforms. Don't think MacIntel vs Wintel, think IBM System/360 vs DEC PDP-11 vs Motorola 68x vs x86. And so the C standard says that certain things are syntactically valid but their behavior can depend on the compiler, the compiler options, the hardware, and the day of the week. For example, according to the standard, dereferencing a null pointer is undefined. It may segfault; silently appear to succeed but actually return garbage; update your Facebook page with your masturbation frequency; or none of the above. It's all good.
Why should you care about portability if you aren't planning to port? Well, for one thing, undefined behavior can vary across versions of the same compiler or even different compiler options so planning not to port is not a realistic option. For another, even when you think you know what is going to happen on your target platform you might be wrong in some cases - some security critical cases, even. And finally, if you're good at hacking C there's a good chance you'll someday be called upon to write code for some device with a very different architecture. You might as well get into good habits now.
NULLity
While I was unable to find an example of Facebook updating, I was able to find examples of two other distinct null pointer behaviors. The following program derefs a null:
#include <stdio.h>

int main() {
    puts(NULL);
}
On one common platform the program caused a segfault as you might expect. But on a second, mystery platform (to be revealed shortly) the program ran to "successful" completion with the output "║►╤". Yup, garbage, not crash.
Here's another one. The standard says that a null pointer can textually be written with a 0 in cases where the compiler can statically prove that you want a pointer, but it does not guarantee that a null pointer will in fact consist of a bit pattern of all zeros. So if you declare
const int n = 0;
Then the following will always be true
NULL == 0;
n == 0;
But the following might or might not be true
NULL == (void *)n;
(int)NULL == n;
(int)NULL == 0;
Unfortunately, it's a bit difficult to find a platform where the null pointer is not a pattern of 0 bits, so this non-portable code is surprisingly portable; but see the comp.lang.c FAQ.
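If you want to see what your own platform does, here's a sketch that just prints those five comparisons. The first two must print 1; the last three are allowed to print either 0 or 1 (and may draw a compiler warning or two about pointer/integer conversions):

#include <stdio.h>

int main() {
    const int n = 0;
    printf("%d\n", NULL == 0);          /* guaranteed: 1 */
    printf("%d\n", n == 0);             /* guaranteed: 1 */
    printf("%d\n", NULL == (void *)n);  /* n is not a constant expression, so no guarantee */
    printf("%d\n", (int)NULL == n);     /* no guarantee */
    printf("%d\n", (int)NULL == 0);     /* no guarantee */
    return 0;
}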
Good Math, Bad Pointer Math
Here is an example of some code that does behave very, very differently.
#include <stdio.h>

int main() {
    const unsigned long base = 0xfffc;

    /* table header */
    printf("\t\tlong\t\tchar *\n");

    /* size of data types */
    printf("size\t\t%d\t\t%d\n", (int)sizeof(base), (int)sizeof((char *)base));

    /* the core table */
    for (unsigned long i = 0; i < 8; i++) {
        unsigned long n = base + i;
        char *p = (char *)base + i;
        printf("%08lx + %lu\t%08lx\t%08lx\n", base, i, n, p);
    }
}
C aficionados will be horrified that I'm using non-portable %08lx instead of portable %p to output a pointer, but using %p would reveal too much. On the first, common platform the result was
                long        char *
size            4           4
0000fffc + 0    0000fffc    0000fffc
0000fffc + 1    0000fffd    0000fffd
0000fffc + 2    0000fffe    0000fffe
0000fffc + 3    0000ffff    0000ffff
0000fffc + 4    00010000    00010000
0000fffc + 5    00010001    00010001
0000fffc + 6    00010002    00010002
0000fffc + 7    00010003    00010003
So both longs and pointers are 4 bytes (32 bits) and both long and pointer math are pretty ordinary.
Now, on the second, mystery platform with one set of compiler options I get this
                long        char *
size            4           4
0000fffc + 0    0000fffc    0000fffc
0000fffc + 1    0000fffd    0000fffd
0000fffc + 2    0000fffe    0000fffe
0000fffc + 3    0000ffff    0000ffff
0000fffc + 4    00010000    10000000
0000fffc + 5    00010001    10000001
0000fffc + 6    00010002    10000002
0000fffc + 7    00010003    10000003
Still 32 bits, but pointer math is, um, strange: 3 nibbles (12 bits) have been skipped. And it gets weirder still with a different set of compiler options on the mystery platform.
                long        char *
size            4           4
0000fffc + 0    0000fffc    0000fffc
0000fffc + 1    0000fffd    0000fffd
0000fffc + 2    0000fffe    0000fffe
0000fffc + 3    0000ffff    0000ffff
0000fffc + 4    00010000    00000000
0000fffc + 5    00010001    00000001
0000fffc + 6    00010002    00000002
0000fffc + 7    00010003    00000003
So now my 32 bit pointer is wrapping around at a mere 16 bits. And that's a clue to our mystery: we're kicking it old skool.
Mystery Revealed
For my first, common platform I'm using "gcc -std=c99" on 64 bit Ubuntu x86 but compiling using "-m32" to force 32 bit compilation. The second, mystery platform is the Watcom C compiler in C99 mode (wcc -za99) on the FreeDOS MS-DOS compatible operating system running in a VirtualBox virtual machine.
Using DOS keeps the machine in "real mode" - which basically means the chip (or virtual chip) behaves as an 8086[1]. Now, the 8086 is a peculiar beast. It's a 16 bit chip - registers are all 16 bits and most instructions work with 16 bit words - but it has a 20 bit memory address width. How do you squeeze 20 bits into a 16 bit word? Well, internally the 8086 uses a highly optimized form of data compression based on lzw that...
No, wait, that's not right. The answer is that the 8086 un-squeezes 20 bits into 32 bits - two 16 bit words instead of one. But here's where it gets weird. Instead of saying "we'll allow 32 bits of addressing but make it a hardware fault if anything above the low 20 bits is set" the 8086 designers said, well, this:
  XXXX0    the "segment" shifted left by 4 bits
+  XXXX    the "offset"
-------
  XXXXX    final 20 bit address
A segment register is shifted left 4 bits and then added to an offset register to get an actual address. So now it's clear why 0x0000fffc + 4 = 0x10000000 with one set of compiler flags: the segment has been incremented by 1 in the bits that count[2]. But why is the result 0x00000000 with a different set of compiler flags?
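If the shift-and-add arithmetic seems too cute to be true, here's a tiny sketch - plain portable C, nothing 8086-specific about it - that applies the formula by hand and shows that the odd-looking 1000:0000 really is the byte four past 0000:fffc:

#include <stdio.h>

/* real mode: physical address = (segment << 4) + offset, 20 bits total */
static unsigned long phys(unsigned int seg, unsigned int off) {
    return ((unsigned long)seg << 4) + off;
}

int main() {
    printf("%05lx\n", phys(0x0000, 0xfffc)); /* 0fffc - where our table started          */
    printf("%05lx\n", phys(0x1000, 0x0000)); /* 10000 - 0xfffc + 4, with the segment bumped */
    return 0;
}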
In order to get the 0x0000fffc + 4 = 0x10000000 behavior the machine code emitted by the compiler has to keep track of overflow - so every pointer add is not just a register add, but a register add plus an overflow check and perhaps a second register update. There may also be a move of two registers to/from memory instead of just one. That could be quite a performance hit on a lowly 4.77MHz machine (the original clock speed on the IBM PC).
So, as an optimization, most C compilers for 8086 real mode allow you to say that no structure is ever going to be larger than 2^16 bytes (64K) so that the emitted code doesn't always need to do all those gymnastics. The upshot: pointers silently overflow at 16 bits. 0x0000fffc + 4 = 0x00000000 and 0x1000fffc + 4 = 0x10000000.
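To make the difference concrete, here's a sketch that mimics the two behaviors in portable C. It reproduces the numbers above, but it's a model of what the emitted pointer arithmetic does, not a claim about the exact instructions Watcom generates:

#include <stdio.h>
#include <stdint.h>

/* huge model: recompute the 20 bit physical address, then keep the low 16 bits
   in the offset and push the rest into the top nibble of the segment (see footnote 2) */
static uint32_t add_huge(uint32_t ptr, uint16_t delta) {
    uint16_t seg = (uint16_t)(ptr >> 16), off = (uint16_t)ptr;
    uint32_t phys = ((uint32_t)seg << 4) + off + delta;
    return (((phys >> 16) & 0xfu) << 28) | (phys & 0xffffu);
}

/* large model: the 16 bit offset silently wraps and the segment never moves */
static uint32_t add_large(uint32_t ptr, uint16_t delta) {
    uint16_t seg = (uint16_t)(ptr >> 16), off = (uint16_t)ptr;
    return ((uint32_t)seg << 16) | (uint16_t)(off + delta);
}

int main() {
    printf("huge:  0000fffc + 4 = %08lx\n", (unsigned long)add_huge(0x0000fffcu, 4));  /* 10000000 */
    printf("large: 0000fffc + 4 = %08lx\n", (unsigned long)add_large(0x0000fffcu, 4)); /* 00000000 */
    printf("large: 1000fffc + 4 = %08lx\n", (unsigned long)add_large(0x1000fffcu, 4)); /* 10000000 */
    return 0;
}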
If you're playing along at home the first result was achieved using the -mh ("huge" memory model) compiler flag and the second one was with -ml ("large" memory model). Both were linked with "wlink name test file test.obj". For more about the gory details of real mode memory models see 16 bit memory models[3].
The Standard
Back to earth now. The C standard is fine with all of the above behavior. As a gross oversimplification the standard says that
- A pointer must be NULL or must point at something valid or must point just past the end of something valid. Pointer math that would result in a pointer that is outside of those bounds is undefined. It might work like any of my examples, or it might put a phone order in for pizza.
- A dereference of a NULL pointer, or of a pointer that does not point at something valid - even a pointer just one step past the end of something valid - is also undefined. You pray for a crash (see the sketch just below).
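Here's a sketch of where those lines fall; the array and names are made up for illustration:

int main() {
    int a[10];
    int *end = a + 10;             /* fine: computing and comparing one-past-the-end is allowed */
    int in_bounds = (a + 3 < end); /* fine: both pointers are within (or one past) a */

    /* Both of the following are undefined, even though they "look" harmless: */
    /* int *past = a + 11;    computing a pointer beyond one-past-the-end */
    /* int x = *end;          dereferencing the one-past-the-end pointer  */

    return in_bounds ? 0 : 1;
}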
In short, the C standard doesn't pretend that your program is one giant array. It pretends that each individual allocation creates an island unto itself (which may be part of a larger island or composed of smaller islands). The C standard is pretty quiet about what can happen between those islands. It's just that on modern platforms memory tends to behave like one giant flat mutable array, so it's easy to assume that that's part of the C model.
One way to look at it is that the hypothetical C standard memory model treats pointers somewhat the way higher level languages treat references and array indexing, except that those higher level languages define bad behavior as either statically impossible (no pointer math) or dynamically checked (null pointer exceptions and array index checks), whereas the C standard just says "undefined."
Obviously looping around some arbitrary pointer location is not going to be portable, but this stuff has less obvious consequences. Code like the following is not guaranteed to work
char *buf = malloc(BUF_SIZE);
char *end_of_buf = buf + BUF_SIZE;

...

/* check for buffer overrun */
if (buf + offset >= end_of_buf) {
    /* buffer overrun */
} else {
    /* everything is okay */
}
As the loop above showed, buf + offset might in fact be less than buf on an overrun. The correct answer is to compare offset with BUF_SIZE.
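Something along these lines is the portable way to phrase it - same hypothetical buf, BUF_SIZE, and offset as above, but the check compares sizes and never manufactures an out-of-bounds pointer:

#include <stddef.h>

#define BUF_SIZE 4096   /* hypothetical size, just so this sketch compiles */

/* check for buffer overrun by comparing sizes, not pointers */
static char *at_offset(char *buf, size_t offset) {
    if (offset >= BUF_SIZE) {
        /* buffer overrun */
        return NULL;
    }
    /* everything is okay */
    return buf + offset;
}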
In Defense of Undefined
When people explain C's choices it's common to say that it's about implementation efficiency. Indeed that's a huge part of it. But remember that C was specifically designed to allow very low level access; to allow much (though by no means all) of what assembly allows. So C's ability to write to arbitrary bits of memory allows useful things like talking to memory mapped devices. That kind of code is not portable, but there are important bits of some kinds of programs that aren't going to be portable no matter what you do. The trick is to keep those bits walled off and as small as possible.
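For a flavor of what that walled-off code looks like, here's a sketch of the usual memory-mapped-register idiom. The register name and address are entirely made up, and nothing about it is portable - it only means something on a platform whose documentation actually puts a transmit register at that address:

#include <stdint.h>

/* hypothetical UART transmit register on some imaginary board */
#define UART_TX ((volatile uint8_t *)0x4000c000u)

static void uart_putc(char c) {
    *UART_TX = (uint8_t)c;   /* defined by the hardware manual, not by the C standard */
}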
So be careful with your pointers and I'll see all you former Java programmers in the unemployment line as I drive by in my new car with the custom plates "PTRMATH%saccount=34389983829 password=h3ll0k1ttySegmentation fault.
Footnotes
[1] Yes, I'm keeping a "virtual" machine in "real" mode. I'm just as confused as you are.
[2] Actually, one consequence of this funky scheme is that the same location in memory can be addressed multiple ways. 0x00120 + 0x0003 = 0x00123 and 0x00100 + 0x00023 = 0x00123. It's just that in the huge memory model the compiler's emitted code for pointer math keeps things canonical by using only the highest 4 bits of the segment.
[3] For even more weirdness associated with this segmentation stuff read a bit about the A20 Line, something that causes heartburn to bootloader writers to this day.