Monday, April 12, 2010

C Is Not Assembly

In my last article I dug down into some of the joys of undefined pointer behavior in C. But I did it in the context of an architecture that most developers aren't too likely to ever see again, the Intel 8086. I wanted to show that this stuff matters even with more mainstream architectures because compilers are free to do a lot of non-obvious things. C is not assembly.

The United States Computer Emergency Readiness Team (US-CERT) "is charged with providing response support and defense against cyber attacks for the Federal Civil Executive Branch (.gov) and information sharing and collaboration with state and local government, industry and international partners."

With a U.S. Department of Homeland Security badge on their page you know they're serious. When you overflow your buffers the terrorists win. So you'd think they'd take C seriously.

I found a real WTF gem caused by programmers treating C like assembly: Vulnerability Note VU#162289.

"In the C language, given the following types:

        char *buf;
        int len;

some C compilers will assume that buf+len >= buf. As a result, code that performs wrapping checks similar to the following:

len = 1<<30;
[...]
if(buf+len < buf)  /* wrap check */
  [...overflow occurred...]

are optimized out by these compilers; no object code to perform the check will appear in the resulting executable program. In the case where the wrap test expression is optimized out, a subsequent manipulation of len could cause an overflow. As a result, applications that perform such checks may be vulnerable to buffer overflows."

The advisory is careful to admit that compilers are free to do just that. Here's why: greatly simplified, the C standard says a pointer must point at a valid object or just past the end. Any pointer arithmetic that might cause a pointer to step outside those bounds yields undefined behavior. Now, there's some code missing from their snippet, but I have to assume that the compiler knows that buf points at the beginning of something and not into the middle. So by definition either buf + len >= buf or the program is free to do anything up to and including launching shoulder mounted kitten missiles at Capitol Hill.
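
Spelled out as code, the compiler's reasoning looks something like this (an illustrative function of my own, not the advisory's):

/* an illustrative sketch, not the advisory's code */
void check(char *buf, int len) {
   /* If buf + len stays inside (or one past the end of) the object that
      buf points to, the comparison is false.  If it steps outside, the
      addition itself is undefined behavior, so the compiler may assume
      it never happens.  Either way the branch is dead code. */
   if (buf + len < buf) {
      /* ...overflow handling the compiler is free to discard... */
   }
}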

Still, there is a WTF to lay on the compiler writer here. If a programmer writes an "if" test then presumably he or she had some reason to believe that sometimes the test might be true. Before optimizing away the conditional the compiler really should have issued a warning.

In order for this code to have any hope of working a few assumptions must hold: sizeof(int) <= sizeof(char *), overflow must happen "silently", etc. But there's another major assumption here: the buffer pointed to by buf must be located at the end of its address space. With a check like this, if there are any objects located higher in the same address space then those objects are getting some kitten missiles. So another WTF is a developer making assumptions about how a compiler works in the face of undefined behavior.

Now, there are a few scenarios where all these assumptions might be justified. For instance, if you're targeting some special purpose embedded device then memory layout might be well understood. In such situations, the optimization performed by the compiler might be shocking indeed, even if technically permissible.

The problem is that the developer is thinking at the assembly code level but the C standard says the compiler doesn't have to "think" the same way. In assembly the distance between what you write and the object code generated is pretty small (barring some kind of complicated macro magic). In C the distance is quite a bit larger.

Repeat after me, C is not assembly.

The Solution?

In my mind the real WTF is US-CERT's proposed solution.

"Cast objects of type char* to uintptr_t before comparison. The faulty wrapping check listed above would be written

#include <stdint.h>
[...]
if((uintptr_t)buf+len < (uintptr_t)buf)
   [...]

Alternatively, developers can use size_t on platforms that do not provide the uintptr_t type."

Yes, that's right, the proposed solution is to replace one set of undefined behavior with another. Well...WTF? Didn't we get here because the compiler was free to do anything in the face of undefined semantics?

C is not assembly. Not even when you do some casting. Pointers might be segmented in such a way that overflow behavior for pointers is totally different from that of integral types. Or sizeof(int) > sizeof(uintptr_t). Or the cast to an integral type could do anything at all like invert the bits. Or shoulder mounted kitten missiles. Whatever.

The Real Solution

The best solution is "don't do that." Give the buffer a size and compare len to that size. The assumption that the buffer is the last object in its address space is just playing with fire. But I'll assume they really did need to use memory at the end of the address space. Fine. What's the solution?

It's this: you can't say "end of address space" portably in C, so tread carefully when you use undefined behavior. Minimize the amount of undefined behavior you depend on. Pull the non-portable stuff out into separate compilation units and document the assumptions. Maybe even write the non-portable stuff in assembly where you know what's going to happen, but at least write a unit test that ensures that the compiler behaves the way you think it does. And always, always remember that C is not assembly.
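
That unit test might be nothing more than a handful of assertions over the documented assumptions. Here's a hedged sketch; the specific assertions are illustrative, not a checklist:

#include <assert.h>

int main(void) {
   /* documented assumption: an int index fits in a pointer-sized integer */
   assert(sizeof(int) <= sizeof(char *));
   /* documented assumption: unsigned arithmetic wraps around (this one the
      standard actually guarantees; signed overflow it does not) */
   unsigned int u = 0;
   assert(u - 1 == (unsigned int)-1);
   return 0;
}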

Thursday, April 8, 2010

Good Math, Bad Pointer Math

Heard about the new C bandwagon? According to April's Tiobe report C has eclipsed Java as the number 1 most popular language by a whopping 0.007%. So while all the rest of you Java suckers have just lost all professional relevance, I'm OK. I cut my professional teeth on C.

Some time ago on Twitter I gave in-jest advice to new C programmers: all your code and data are in one giant mutable array indexed by pointers, good luck. David MacIver (http://twitter.com/drmaciver) responded that the C standard didn't say that. Whereupon I swore a mighty blood oath of vengeance upon him for ruining my joke with facts.

Vengeance is hard so I'll get started later, maybe tomorrow or next week. Right now I'll write a bit about the facts behind pointers. One of the trickiest aspects of writing portable C programs is that a great deal of the specification says that the behavior of certain constructs is undefined. Old C hands sometimes point out some undefined behavior in a C newbie's program whereupon the newbie fires up gcc or msvc or whatever and shows that the code behaves exactly as it "should." Who's right? What does it mean to be undefined?

C was designed to 1) be portable across a wide range of platforms and 2) work at a very low level. Those two goals are pretty much in opposition - low level details can vary widely across platforms. Don't think MacIntel vs Wintel, think IBM System/360 vs DEC PDP-11 vs Motorola 68x vs x86. And so the C standard says that certain things are syntactically valid but the behavior can depend on compiler, options, hardware, and the day of the week. For example, according to the standard dereferencing a null pointer is undefined. It may segfault; silently appear to succeed but actually return garbage; update your Facebook page with your masturbation frequency; or none of the above. It's all good.

Why should you care about portability if you aren't planning to port? Well, for one thing, undefined behavior can vary across versions of the same compiler or even different compiler options so planning not to port is not a realistic option. For another, even when you think you know what is going to happen on your target platform you might be wrong in some cases - some security critical cases, even. And finally, if you're good at hacking C there's a good chance you'll someday be called upon to write code for some device with a very different architecture. You might as well get into good habits now.

NULLity

While I was unable to find an example of Facebook updating I was able to find examples of two other distinct null pointer behaviors. The following program derefs a null:

#include <stdio.h>

int main() {
   puts(NULL);   /* undefined behavior: puts expects a valid string */
}

On one common platform the program caused a segfault as you might expect. But on a second, mystery platform (to be revealed shortly) the program ran to "successful" completion with the output "║►╤". Yup, garbage, not crash.

Here's another one. The standard says that a null pointer can textually be written with a 0 in cases where the compiler can statically prove that you want a pointer, but it does not guarantee that a null pointer will in fact consist of a bit pattern of all zeros. So if you declare

const int n = 0;

Then the following will always be true

NULL == 0;
n == 0;

But the following might or might not be true

NULL == (void *)n;
(int)NULL == n;
(int)NULL == 0;

Unfortunately, it's a bit difficult to find a platform where the null pointer is not a pattern of 0 bits so this non-portable code is surprisingly portable, but see the comp.lang.c FAQ.
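
If you're curious what your own platform does, here's a minimal sketch (my code, not the FAQ's) that dumps the byte representation of a null pointer; on most machines it prints all zeros, but the standard makes no such promise:

#include <stdio.h>
#include <string.h>

int main(void) {
   void *p = NULL;
   unsigned char bytes[sizeof p];

   /* copy the pointer's object representation so we can inspect the raw bits */
   memcpy(bytes, &p, sizeof p);
   for (size_t i = 0; i < sizeof p; i++)
      printf("%02x ", bytes[i]);
   putchar('\n');
   return 0;
}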

Good Math, Bad Pointer Math

Here is an example of some code that does behave very, very differently.

#include <stdio.h>

int main() {
   const unsigned long base = 0xfffc;

   /* table header */
   printf("\t\tlong\t\tchar *\n");

   /* size of data types */
   printf("size\t\t%d\t\t%d\n", sizeof(base), sizeof((char *)base));

   /* the core table */
   for (unsigned long i = 0; i < 8; i++) {
      unsigned long n = base + i;
      char *p = (char *)base + i;
      printf("%08lx + %ld\t%08lx\t%08lx\n", base, i, n, p);
   }
}

C aficionados will be horrified that I'm using non-portable %08lx instead of portable %p to output a pointer, but using %p would reveal too much. On the first, common platform the result was

              long      char *
size          4         4
0000fffc + 0  0000fffc  0000fffc
0000fffc + 1  0000fffd  0000fffd
0000fffc + 2  0000fffe  0000fffe
0000fffc + 3  0000ffff  0000ffff
0000fffc + 4  00010000  00010000
0000fffc + 5  00010001  00010001
0000fffc + 6  00010002  00010002
0000fffc + 7  00010003  00010003

So both longs and pointers are 4 bytes (32 bits) and both long and pointer math are pretty ordinary.

Now, on the second, mystery platform with one set of compiler options I get this

              long      char *
size          4         4
0000fffc + 0  0000fffc  0000fffc
0000fffc + 1  0000fffd  0000fffd
0000fffc + 2  0000fffe  0000fffe
0000fffc + 3  0000ffff  0000ffff
0000fffc + 4  00010000  10000000
0000fffc + 5  00010001  10000001
0000fffc + 6  00010002  10000002
0000fffc + 7  00010003  10000003

Still 32 bits but pointer math is, um, strange. 3 nibbles (12 bits) have been skipped. It gets perhaps weirder with a different set of compiler options on the mystery platform.

              long      char *
size          4         4
0000fffc + 0  0000fffc  0000fffc
0000fffc + 1  0000fffd  0000fffd
0000fffc + 2  0000fffe  0000fffe
0000fffc + 3  0000ffff  0000ffff
0000fffc + 4  00010000  00000000
0000fffc + 5  00010001  00000001
0000fffc + 6  00010002  00000002
0000fffc + 7  00010003  00000003

So now my 32 bit pointer is wrapping around at a mere 16 bits. And that's a clue to our mystery: we're kicking it old skool.

Mystery Revealed

For my first, common platform I'm using "gcc -std=c99" on 64 bit Ubuntu x86 but compiling using "-m32" to force 32 bit compilation. The second, mystery platform is the Watcom C compiler in C99 mode (wcc -za99) on the FreeDOS MS-DOS compatible operating system running in a VirtualBox virtual machine.

Using DOS keeps the machine in "real mode" - which basically means the chip (or virtual chip) behaves as an 8086[1]. Now, the 8086 is a peculiar beast. It's a 16 bit chip - registers are all 16 bits and most instructions work with 16 bit words - but it has a 20 bit memory address width. How do you squeeze 20 bits into a 16 bit word? Well, internally the 8086 uses a highly optimized form of data compression based on lzw that...

No, wait, that's not right. The answer is that the 8086 un-squeezes 20 bits into 32 bits - two 16 bit words instead of one. But here's where it gets weird. Instead of saying "we'll allow 32 bits of addressing but make it a hardware fault if anything above the low 20 bits is set" the 8086 designers said, well, this:

  XXXX0  the "segment" shifted left by 4 bits
+  XXXX  the "offset"
-------
  XXXXX  final 20 bit address

A segment register is shifted left 4 bits and then added to an offset register to get an actual address. So now it's clear why 0x0000fffc + 4 = 0x10000000 with one set of compiler flags: the segment has been incremented by 1 in the bits that count[2]. But why is the result 0x00000000 with a different set of compiler flags?

In order to get the 0x0000fffc + 4 = 0x10000000 behavior the machine code emitted by the compiler has to keep track of overflow - so every pointer add is not just a register add, but a register add plus an overflow check and perhaps a second register update. There may also be a move of two registers to/from memory instead of just one. That could be quite a performance hit on a lowly 4.77MHz machine (the original clock speed on the IBM PC).

So, as an optimization, most C compilers for 8086 real mode allow you to say that no structure is ever going to be larger than 2^16 bytes (64K) so that the emitted code doesn't always need to do all those gymnastics. The upshot: pointers silently overflow at 16 bits. 0x0000fffc + 4 = 0x00000000 and 0x1000fffc + 4 = 0x10000000.
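
To make the two behaviors concrete, here's a hedged sketch in plain C of what I believe the emitted pointer math amounts to (my own model, not Watcom's actual output), treating a far pointer as the 32-bit segment:offset value the tables above print:

/* large model: only the 16-bit offset participates in the add, so it
   silently wraps; add_large(0x0000fffc, 4) yields 0x00000000 */
unsigned long add_large(unsigned long segoff, unsigned long delta) {
   unsigned long seg = (segoff >> 16) & 0xffffUL;
   unsigned long off = ((segoff & 0xffffUL) + delta) & 0xffffUL;
   return (seg << 16) | off;
}

/* huge model: the carry out of the offset is folded into the segment
   (0x10000 bytes of offset = 0x1000 worth of segment), so
   add_huge(0x0000fffc, 4) yields 0x10000000 */
unsigned long add_huge(unsigned long segoff, unsigned long delta) {
   unsigned long seg = (segoff >> 16) & 0xffffUL;
   unsigned long off = (segoff & 0xffffUL) + delta;
   seg = (seg + ((off >> 16) << 12)) & 0xffffUL;
   return (seg << 16) | (off & 0xffffUL);
}

/* the 20 bit physical address the 8086 forms from either representation */
unsigned long physical(unsigned long segoff) {
   return ((segoff >> 16) << 4) + (segoff & 0xffffUL);
}

Running the numbers through physical() shows why the huge result is the honest one: physical(0x10000000) is 0x10000, which is exactly 0xfffc + 4, while the large model result maps back to physical address 0.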

If you're playing along at home the first result was achieved using the -mh ("huge" memory model) compiler flag and the second one was with -ml ("large" memory model). Both were linked with "wlink name test file test.obj". For more about the gory details of real mode memory models see 16 bit memory models[3].

The Standard

Back to earth now. The C standard is fine with all of the above behavior. As a gross oversimplification the standard says that

  1. A pointer must be NULL or must point at something valid or must point just past the end of something valid. Pointer math that would result in a pointer that is outside of those bounds is undefined. It might work like any of my examples, or it might put a phone order in for pizza.
  2. A dereference of a NULL pointer, or of a pointer that does not point at something valid, is also undefined, even if the pointer is only pointing one step past the end of something valid. You pray for a crash.
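
A tiny sketch (my example, not the standard's wording) of what those two rules allow and forbid:

/* rule 1 lets you form a pointer one past the end of an array;
   rule 2 forbids dereferencing it */
int sum_all(void) {
   int a[4] = {1, 2, 3, 4};
   int *end = a + 4;          /* fine: points just past the end */
   int sum = 0;
   for (int *p = a; p < end; p++)
      sum += *p;              /* fine: p always points at a valid element here */
   /* *end   would be undefined (rule 2) */
   /* a + 5  is undefined even to compute (rule 1) */
   return sum;
}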

In short, the C standard doesn't pretend that your program is one giant array. It pretends each individual allocation creates an island unto itself (and may be part of a larger island or composed of smaller islands). The C standard is pretty quiet about what can happen between those islands. It's just that on modern platforms memory tends to behave like one giant flat mutable array, so it's easy to assume that that's part of the C model.

One way to look at it is that the hypothetical C standard memory model treats pointers somewhat the way higher level languages treat references and array indexing, except that those higher level languages define bad behavior as either being statically impossible (no pointer math) or dynamically checked (null pointer exceptions and array index checks), whereas the C standard just says "undefined."

Obviously looping around some arbitrary pointer location is not going to be portable, but this stuff has less obvious consequences. Code like the following is not guaranteed to work

char *buf = malloc(BUF_SIZE);
char *end_of_buf = buf + BUF_SIZE;

...

/* check for buffer overrun */
if (buf + offset >= end_of_buf) {
  /* buffer overrun */
} else {
  /* everything is okay */
}

As the loop above showed, buf + offset might in fact be less than buf on an overrun. The correct answer is to compare offset with BUF_SIZE.
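
Here's what that looks like as a hedged sketch (BUF_SIZE and the function name are illustrative; the point is that only integers are compared):

#include <stddef.h>

#define BUF_SIZE 4096   /* illustrative; the original snippet left it unspecified */

/* portable version of the check above: compare the index, not the pointers */
char *checked_index(char *buf, size_t offset) {
   if (offset >= BUF_SIZE) {
      /* buffer overrun */
      return NULL;
   }
   /* everything is okay */
   return buf + offset;
}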

In Defense of Undefined

When people explain C's choices it's common to say that it's about implementation efficiency. Indeed that's a huge part of it. But remember that C was specifically designed to allow very low level access; to allow much (though by no means all) of what assembly allows. So C's ability to write to arbitrary bits of memory allows useful things like talking to memory mapped devices. That kind of code is not portable, but there are important bits of some kinds of programs that aren't going to be portable no matter what you do. The trick is to keep those bits walled off and as small as possible.
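
For a flavor of the useful-but-unportable code that sentence has in mind, here's a tiny sketch; the register address and names are made up, and on a hosted operating system this would simply fault:

#include <stdint.h>

/* hypothetical memory-mapped status register; the address is illustrative */
#define DEVICE_STATUS ((volatile uint32_t *)0x40021018u)

/* spin until the (made-up) ready bit comes on, then clear the register */
void wait_until_ready(void) {
   while ((*DEVICE_STATUS & 0x1u) == 0)
      ;   /* volatile forces a real read on every iteration */
   *DEVICE_STATUS = 0;
}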

So be careful with your pointers and I'll see all you former Java programmers in the unemployment line as I drive by in my new car with the custom plates "PTRMATH%saccount=34389983829 password=h3ll0k1ttySegmentation fault.

Footnotes

  1. Yes, I'm keeping a "virtual" machine in "real" mode. I'm just as confused as you are.
  2. Actually, one consequence of this funky scheme is that the same location in memory can be addressed multiple ways. 0x00120 + 0x0003 = 0x00123 and 0x00100 + 0x00023 = 0x00123. It's just that in the huge memory model the compiler's emitted code for pointer math keeps things canonical by using only the highest 4 bits of the segment.
  3. For even more weirdness associated with this segmentation stuff read a bit about the A20 Line, something that causes heartburn to bootloader writers to this day.