C3 Blog
Christoffer Lernö
For C3 I wanted to address the problem of commenting out code using block comments.

In a good overview, Dennie Van Tassel outlined four different types of comments:

  1. Full line comments, this is exemplified by REM in BASIC: The line only contains the comment and it runs to the end of the line.
  2. End-of-Line comments, in C/C++ that would be //
  3. Block comments, '/* ... */' in C/C++
  4. Mega comments, which in C/C++ can be emulated by using // on every line or using the preprocessor with #if 0 ... #endif


We can ignore full line comments, since they're completely covered by end-of-line comments, and C3 already has those as well as /* ... */ block comments.

However, the mega comments pose a problem. In C3 the analogue to #if 0 ... #endif is $if 0 ... $endif, but it requires the code inside to parse.

Since a typical use for mega comments is to paste a slab of C code inside of comments and then convert it piecemeal, $if 0 doesn't work.

What about making /* ... */ nest?

In an article from 2017 titled Block Comments are a Bad Idea, Troels Henriksen argues that adding nesting to block comments does not really solve the problem, and shows the following example from Haskell, which uses {- ... -} for nested comments:

{-
s = "{-";
-}


In the above example the {- inside of the string inadvertently opens a new nested comment. He rejects the idea that the lexer (or even worse, *the parser*) should track strings inside of comments. Instead Henriksen argues for either using #if 0 or // on every line. While the latter is exactly what Zig picked, it relies too much on the text editor for my taste.
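Henriksen's example is easy to reproduce. Below is a minimal nesting-aware comment stripper, sketched in Python purely for illustration (the function name and structure are my own), which, like the Haskell lexer, ignores string literals while inside a comment:

```python
# Count nested comment openers/closers without any awareness of strings,
# emitting only the text outside comments.
def strip_nested(src, open_tok="{-", close_tok="-}"):
    out, i, depth = [], 0, 0
    while i < len(src):
        if src.startswith(open_tok, i):
            depth += 1
            i += len(open_tok)
        elif depth and src.startswith(close_tok, i):
            depth -= 1
            i += len(close_tok)
        else:
            if depth == 0:
                out.append(src[i])
            i += 1
    return "".join(out), depth

# The {- inside the string literal opens a second nesting level, so the
# final -} leaves one level still open: the comment never terminates.
text, depth = strip_nested('{-\ns = "{-";\n-}')
print(depth)  # 1
```

Because the lexer counts the {- inside the string as a real opener, the trailing -} only brings the depth back to 1, and the rest of the file is swallowed by the comment.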

Looking at D, it introduces a new nested comment /+ ... +/. It acts just like /* ... */ except it is nested. Initially this was what I picked for C3.

However it has drawbacks:

  1. It introduces another comment type that is only marginally different from the others.
  2. It can have the s = "/+" problem just like "/*" – we just moved the problem.
  3. For beginners coming from C it's not obvious that this comment type is available, so it may get underused.
  4. It does not visually indicate that it should be used for mega comments rather than regular comments.


There's another point as well: #if 0 ... #endif can never have the s = "/*" issue by virtue of always starting and ending on its own line.

Doing some research I tried to determine if there was some "obvious" syntax that could convey the #if 0 ... #endif behaviour. I had a lot of examples (that I hated), like /--- ... ---/, /--> ... /<---, and even ideas of a heredoc style comment like /$FOO ... /$$FOO.

Ultimately I decided to pick /# ... #/ for these block comments, which acted like nested comments but were required to start on a new line, which bypasses the problem:

/#
s = "/#"; <- not recognized
#/


But it turns out that this has issues of its own. What if you accidentally write something like:

/#
int x;
int y = foo(); #/


or

foo() /#
int x;
#/


You need a good heuristic to produce a nice error message for these. For example, you could always decide that /#foo is /# + foo, or perhaps only when the /# starts a line, otherwise interpreting it as / + #foo (which can be valid C3).
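To sketch what such a heuristic might look like, here is an illustrative Python model (not the C3 implementation) that only accepts /# and #/ when they stand alone on a line and reports an error otherwise, ignoring nesting for brevity:

```python
# Hypothetical heuristic: /# only opens a mega comment when alone on its
# line, and #/ only closes one when alone on its line. Anything else is
# a precise, reportable error.
def find_mega_comments(src):
    spans, open_line = [], None
    for n, line in enumerate(src.splitlines(), 1):
        s = line.strip()
        if open_line is None:
            if s == "/#":
                open_line = n
            elif "/#" in s:
                raise SyntaxError(f"line {n}: /# must be alone on its line")
        else:
            if s == "#/":
                spans.append((open_line, n))
                open_line = None
            elif "#/" in s:
                raise SyntaxError(f"line {n}: #/ must be alone on its line")
    if open_line is not None:
        raise SyntaxError(f"line {open_line}: unterminated /# comment")
    return spans

print(find_mega_comments("/#\nint x;\n#/"))  # [(1, 3)]
```

With this rule, both accidental cases above produce a clear error: int y = foo(); #/ trips the "must be alone" check, as does foo() /#. A real lexer would of course only apply the check to actual tokens, so that /# inside a string literal in normal code is not misreported.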

But after playing around with this for a while, I had to admit that the value of this seemed much less than I had hoped. Yes, it's distinct, but it shares most of the problems of /+ ... +/ in terms of lack of familiarity. And if I'm honest with myself, I personally still mostly use /* ... */ over #if 0 ... #endif where I can.

So we've come full circle: from nesting /* ... */, to distinct nesting block comments, to #if 0 ... #endif, and now back to perhaps nesting /* ... */?

For now at least, C3 will add nesting to /* ... */ and remove /+ ... +/. This is an imperfect solution, but possibly also a reasonable trade-off to keep the language familiar, with features that pull their weight.
Christoffer Lernö
Continuing from our previous article, let's look at some more possibilities and maybe find a solution.

An even more ambitious widening strategy


The previous idea with both a maximum and minimum promotion type didn't pan out, but what if we went much further?

We use the idea of a maxint, which is a signed-only integer type that is wider than the widest ordinary type. If i128 is supported natively on the target, then the maxint is i256; if the widest normal int type is long (defined as 64 bits in C3), the maxint becomes i128.

We also have a minimum arithmetic integer type, which will usually be the same size as C's int. The algorithm is then as follows:

1. Determine the type of the left hand side. This is the target type. If the target type is unsigned, promote it to the next power-of-two type. E.g. uint -> long.
2. Check the leaf operands (variables, integer literals, casts, functions). If they exceed the target type this is a compile time error.
3. Promote the operands to at least the minimum arithmetic integer type.
4. If the resulting type is narrower than the target type, promote to the target type.
5. Trap all operations.
6. Trap the truncation of the right hand side down to the actual type of the left hand side.
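The steps above can be sketched in Python (illustrative only; the names are mine, and Python's unbounded integers stand in for the over-wide maxint type, so the intermediate add can never wrap):

```python
# Integer types are modelled as (bits, signed) pairs.
def bounds(bits, signed):
    return (-(1 << (bits - 1)), (1 << (bits - 1)) - 1) if signed \
        else (0, (1 << bits) - 1)

def widened_assign_add(target, lhs_val, lhs_type, rhs_val, rhs_type):
    # Step 2: leaf operands must fit their own declared types.
    for v, (bits, signed) in ((lhs_val, lhs_type), (rhs_val, rhs_type)):
        lo, hi = bounds(bits, signed)
        if not lo <= v <= hi:
            raise ValueError("operand out of range for its type")
    # Steps 1, 3-5: the add happens in the wide signed maxint type,
    # where it cannot overflow, so only the result needs checking.
    result = lhs_val + rhs_val
    # Step 6: trap the truncation back to the target type.
    lo, hi = bounds(*target)
    if not lo <= result <= hi:
        raise OverflowError("result does not fit the target type")
    return result

# u64 = u64 + i32: adding -1 to UINT64_MAX is fine; adding 1 would trap.
print(widened_assign_add((64, False), (1 << 64) - 1, (64, False), -1, (32, True)))
```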

Our u64 = u64 + i32 now works; it is implicitly converted to something like u64 = safeCastWithTrap(addWithTrap(cast(u64 as i128), cast(i32 as i128)), ulong).

So far so good, but something like u64 = u64 + u64 will also need i128. This adds a certain level of safety at a potential cost of performance, even though usually a compiler will optimize away the use of the extra 64 bits for simpler expressions.

This looks promising, but what about wrapping behaviour? We can resolve the wrapping operators using the two operands against each other, so that in the case of u64 = i32 +% i16 this becomes u64 = safeCastWithTrap(addWrap(i32, cast(i16 as i32))). We prevent implicit mixing of unsigned and signed without an explicit cast if we cannot convert to either operand: i16 +% u32 would for example be an error.

We allow u32 = i32. This converts into u32 = safeCastWithTrap(i32, u32).

Back to basics, no overflow trapping


Another avenue of exploration is to simplify and go back to C. There are some simple changes one can make: use the left hand side for implicit widening, make signed integer overflow wrap, and require casts for mixed unsigned and signed arithmetic to make it more obvious.

We can allow implicit conversion back from the automatically promoted type: in the case of i16 = i16 + i16, C's arithmetic promotion rules would for example promote the operands to i32 before the calculation. We can allow implicit truncation back in those cases, but perhaps require explicit casts in all other cases. So i16 = i16 + i32 would require some kind of cast. What simplifies things is that we only need a truncating cast here.
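Sketched in Python, the i16 = i16 + i16 case under this model looks like this: the arithmetic is done at 32 bits with wrapping, then implicitly truncated back to i16 (the helper names are illustrative):

```python
# Reduce a value to two's complement at the given bit width.
def wrap(v, bits):
    m = 1 << bits
    v &= m - 1
    return v - m if v >= 1 << (bits - 1) else v

def add_i16(a, b):
    promoted = wrap(a + b, 32)   # arithmetic performed as i32
    return wrap(promoted, 16)    # implicit truncation back to i16

print(add_i16(0x7FFF, 0x7FFF))  # -2
```

With trapping removed, 0x7FFF + 0x7FFF quietly wraps to -2 instead of stopping the program, which is exactly the trade this model makes.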

We can introduce trapping arithmetics as well as a trapping assign that requires both arguments already having the same type:

short x = 1;
char b = 16;
int y = 0x7FFF;
// x = x + y; // ERROR, must have explicit cast.
x = x + b;

// The following are not necessary but may be interesting:
x = b ~* b; // This will trap due to 16 * 16 > MAX_CHAR
// x = x ~+ b; // ERROR both operands must be the same
x ~= y + 1; // Will trap as the value would overflow
x ~= y; // Would work


Some evaluation



Let us evaluate the different strategies using some examples. Here are some code examples that contain vulnerabilities:

func void readpgm(char* name, Pixmap* p)
{
  /* ... */
  pnm_readpaminit(fp, &inpam);
  p.x = inpam.width;
  p.y = inpam.height;
  // Possible overflow
  if (!(p.p = malloc(p.x * p.y)))
  {
    @report("Error at malloc");
  }
  for (int i = 0; i < inpam.height; i++)
  {
    pnm_readpamrow(&inpam, tuplerow);
    for (int j = 0; j < inpam.width; j++)
    {
      // Possible overflow
      p.p[i * inpam.width + j] = sample;
    }
  }
}


func void getComm(uint len, char* src)
{
  uint size;
  // Underflow if len < 2
  size = len - 2;
  // Overflow if size = UINT_MAX
  char* comm = malloc(size + 1);
  memcpy(comm, src, size);
}
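The getComm example is worth making concrete. This Python sketch (illustrative, not C3) models 32-bit unsigned arithmetic to show both failure modes: the underflow when len < 2 and the subsequent overflow of size + 1:

```python
# Model 32-bit unsigned wrap-around for the two computations in getComm.
M32 = 1 << 32

def comm_sizes(length):
    size = (length - 2) % M32    # size = len - 2, unsigned wrap
    alloc = (size + 1) % M32     # malloc(size + 1)
    return size, alloc

# len = 1: size wraps to UINT_MAX and the allocation wraps to 0 bytes,
# so the memcpy copies ~4 GB into a zero-byte buffer.
print(comm_sizes(1))  # (4294967295, 0)
```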


func uint* decode_fh(uint* p, SvcFh* fhp)
{
  int size;
  fh_init(fhp, NFS3_FHSIZE);
  size = ntohl(*p++);
  if (size > NFS3_FHSIZE) return null;
  // size may be < 0, this converts into a huge size
  memcpy(&fhp.fh_handle.fh_base, p, size);
  fhp.fh_handle.fh_size = size;
  return p + XDR_QUADLEN(size);
}


With those examples presented, let's see how they work out using the two strategies:

The widening strategy
func void readpgm(char* name, Pixmap* p)
{
  /* ... */
  pnm_readpaminit(fp, &inpam);
  p.x = inpam.width;
  p.y = inpam.height;
  // Will trap in debug:
  if (!(p.p = malloc(p.x * p.y)))
  {
    @report("Error at malloc");
  }
  for (int i = 0; i < inpam.height; i++)
  {
    pnm_readpamrow(&inpam, tuplerow);
    for (int j = 0; j < inpam.width; j++)
    {
      // Will trap in debug:
      p.p[i * inpam.width + j] = sample;
    }
  }
}


func void getComm(uint len, char* src)
{
  uint size;
  // Will trap in debug
  size = len - 2;
  char* comm = malloc(size + 1);
  memcpy(comm, src, size);
}


func uint* decode_fh(uint* p, SvcFh* fhp)
{
  int size;
  fh_init(fhp, NFS3_FHSIZE);
  size = ntohl(*p++);
  if (size > NFS3_FHSIZE) return null;
  // Will trap in debug
  memcpy(&fhp.fh_handle.fh_base, p, size);
  fhp.fh_handle.fh_size = size;
  return p + XDR_QUADLEN(size);
}


The widening strategy clearly works perfectly in these examples: all overflows are correctly trapped with no need for additional annotations. It "just works".

2s complement wrapping

func void readpgm(char* name, Pixmap* p)
{
  /* ... */
  pnm_readpaminit(fp, &inpam);
  p.x = inpam.width;
  p.y = inpam.height;
  // Unchecked, except for negative values, 
  // use "p.p = malloc(convert(p.x ~* p.y as usize))"
  // to trap in both release and debug
  if (!(p.p = malloc(convert(p.x * p.y as usize))))
  {
    @report("Error at malloc");
  }
  for (int i = 0; i < inpam.height; i++)
  {
    pnm_readpamrow(&inpam, tuplerow);
    for (int j = 0; j < inpam.width; j++)
    {
      // Unchecked, use "i ~* inpam.width ~+ j"
      // to trap in both release and debug
      // but the problem lies in the earlier malloc check.
      p.p[i * inpam.width + j] = sample;
    }
  }
}



func void getComm(uint len, char* src)
{
  uint size;
  // Not checked, use
  // len ~- 2 to trap in both release and debug
  // but it is more natural to check this with
  // preconditions, e.g. assert(len >= 2)
  // So trapping is probably the wrong decision
  size = len - 2;
  char* comm = malloc(size + 1);
  memcpy(comm, src, size);
}


func uint* decode_fh(uint* p, SvcFh* fhp)
{
  int size;
  fh_init(fhp, NFS3_FHSIZE);
  size = ntohl(*p++);
  if (size > NFS3_FHSIZE) return null;
  // Here we are forced to convert
  // it's natural to use a trapping conversion
  // that protects in both release and debug
  memcpy(&fhp.fh_handle.fh_base, p, convert(size as usize));
  fhp.fh_handle.fh_size = size;
  return p + XDR_QUADLEN(size);
}


In these cases we see that the widening strategy has a very clear advantage in detecting errors. This is not surprising, since it's exactly what it's built for. On the other hand the explicit trapping operators give us stronger protection where it's used and the type model is so much simpler.

That last argument is the strongest one in favour of "wrapping". Although these particular examples demonstrate that "widening" works, it introduces a lot of machinery to add the traps, and there are enough implicit checks that we can't be sure the behaviour is free of problems. And yes, u32 = u32 + i32 will implicitly check for overflow, but maybe that should be checked before the assignment? The traps are very useful during testing, but they're no replacement for code that correctly checks its arguments. In C3's case there is already support for contracts, which catch many of the errors above as long as the functions are properly documented, so unlike other languages it already has a somewhat higher level of security.

Summary


I've presented two quite different models, each with its own advantages and disadvantages. Ultimately, what decides which to use is not just the security concerns, but how well it fits with the language in general. Other features may interact in a surprising manner, so it's only after working with a model in practice that one can see the real effects on the language as a whole.
Christoffer Lernö
It is generally understood that overflowing an add or a multiplication is usually a bad thing. This seems to imply that the solution is to detect and trap (quit the program or throw an exception) on such errors. But as we will see, the answer isn't that clear cut.

Commutative and associative addition?


Typically when we work with integers, we prefer that the ordering of the operands doesn't matter. For example, if we see a + b + c we'd prefer that (a + b) + c is the same as a + (b + c).

Unfortunately, if we trap for overflow this does not hold. Here is an example:

int a = INT_MAX;
int b = 20;
int c = -20;
int d = a + b + c;


If we trap on overflow then (a + b) + c would trap, but a + (b + c) would not. Oops.
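This is easy to verify with a sketch. Here trapping 32-bit signed addition is modelled in Python, purely for illustration:

```python
INT_MAX = (1 << 31) - 1
INT_MIN = -(1 << 31)

# A trapping add: raise instead of wrapping when the result leaves i32.
def add_trap(a, b):
    r = a + b
    if not INT_MIN <= r <= INT_MAX:
        raise OverflowError("int overflow")
    return r

a, b, c = INT_MAX, 20, -20
print(add_trap(a, add_trap(b, c)))   # a + (b + c): fine, prints INT_MAX
try:
    add_trap(add_trap(a, b), c)      # (a + b) + c: traps on a + b
except OverflowError:
    print("trapped")
```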

In C the former is even undefined behaviour. Let's pick an unsigned example:

unsigned a = UINT_MAX;
unsigned b = 20;
unsigned c = 20;
unsigned d = a + b - c;


Because unsigned overflow wraps in C, this is well defined and gives the expected result. In languages where even unsigned overflow is trapped, such as Swift, this will trap or not depending on evaluation order. For unsigned we can also create a common overflow (sometimes called an underflow) by having a possibly negative intermediate value:

unsigned a = 20;
unsigned b = 0;
unsigned c = 20;
unsigned d = a + b - c;


In this case the overflow happens if we evaluate a + (b - c). Again this is not a problem in C, but it will be in a language that traps the overflow.

Trapping overflow fixes actual bugs



So is trapping overflow bad? If traps create subtle problems (particularly in C, where overflow is undefined behaviour), shouldn't we always wrap?

Again, the problem is not as simple as that. With trapping overflow we catch this exploit:

char *ptr = malloc(a * b);
...
ptr[a * i + j + offset] = some_value;


We can see here that if there is no trap on overflow, then we can pick a and b so that the allocated size overflows into a small value, and some_value will actually be written out of bounds.

(This is from an actual exploited overflow in the wild.)
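To make the arithmetic concrete, here is an illustrative Python sketch assuming a 32-bit size_t (the values of a and b are chosen for demonstration):

```python
# Pick a and b so that a * b wraps 32 bits to a tiny allocation, while
# an apparently in-range index reaches far past the buffer.
M32 = 1 << 32
a, b = 1 << 16, (1 << 16) + 1    # a * b == 2**32 + 2**16

alloc = (a * b) % M32            # what malloc actually receives
index = a * 1 + 0                # ptr[a * i + j] with i = 1, j = 0
print(alloc)                     # 65536
print(index >= alloc)            # True: the write lands out of bounds
```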

Some people will point out that with proper bounds checking then the exploit cannot occur either. But that relies on proper bounds being known. It's probably possible to rewrite the code in the example to use slices (pointer + length) with bounds checking, but in general we can't rely on that to fix all the various overflows.

A detour: mixed signedness



A quick question: "What should the type of an unsigned int added to a signed int be?"

For C the answer is "an unsigned int", in C# the answer is "a long".

The C answer seems bad, surely we want something signed to be safe? But there's a reason for this apparent madness:

unsigned a = INT_MAX + 10U;
int b = -10;
...
int d = a + b;


In the example above, is the result well defined? – Well, yes it is! All the operands are converted into unsigned and then wrapping addition is performed, followed by an implicit conversion back to a signed int.

What would have happened if C instead had cast the unsigned operand to a signed integer? Well, INT_MAX + 10 results in a large negative number; we then subtract 10 from that value, which results in a value less than INT_MIN, which in C is undefined behaviour due to the overflow.

Because signed ints have undefined behaviour for overflow, it's safer to cast to unsigned in C. Implicit conversion between signed and unsigned representations means that the code looks very simple and is mostly right, even though it does clever things.
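C's rule can be modelled in miniature. This Python sketch (illustrative helpers, assuming 32-bit int) converts both operands to unsigned, adds with wrapping, and reinterprets the result as signed:

```python
BITS = 32
M = 1 << BITS
INT_MAX = M // 2 - 1

def to_unsigned(v):
    return v % M                     # reinterpret bits as unsigned

def to_signed(v):
    v %= M
    return v - M if v >= M // 2 else v

a = to_unsigned(INT_MAX + 10)        # unsigned a = INT_MAX + 10U
b = -10                              # int b = -10
d = to_signed(to_unsigned(a + to_unsigned(b)))  # int d = a + b
print(d == INT_MAX)                  # True: the unsigned detour cancels out
```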

So what about languages with trapping overflow?

The usual case is that such languages require explicit casts. Let's try that:

unsigned a = INT_MAX + 10U;
int b = -10;
...
int d = (int)a + b; // BOOM!


Ok, that didn't work. What about this:

unsigned a = INT_MAX + 10U;
int b = -10;
...
int d = (int)(a + (unsigned)b); // Also BOOM!


In the second case the unsigned version of b becomes a large positive number, causing the unsigned add to overflow as well.

If you think about it, this is quite natural: unsigned maths relies on 2s complement and wrapping overflow to represent negative numbers, so without wrapping behaviour, adding negative numbers can't work.

Let's look at another common example, now with unsigned:

unsigned x = 10;
int change = -1;
x = x + change;


Again this "just works" since C gladly converts change to an unsigned value even if it's negative and it just works. With overflow trap on unsigned again it won't work.

Let's look at how we currently could solve this in Zig:

var x : u32 = 10;
var change : i32 = -1;
x = x +% @bitCast(u32, change);


Let's break it down: @bitCast is needed to convert the i32 in the same way (unsigned)change would do in C.

Now once we have this 2s complement unsigned value, we request wrapping addition using the +% operator (Zig has wrapping operator counterparts for all arithmetic operations).

Problem solved!

... Or wait? What happened to our overflow check? For example, we could set x = 0 and change = -1, and this would happily wrap around without batting an eye.

The real solution that works and traps overflow looks like this:

x = @intCast(u32, @as(i64, x) + @as(i64, change));


So we first go to the larger type (i64), perform the add, then try to narrow the result to a u32, catching any negative numbers or numbers that exceed the u32 maximum.
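The same widen-then-narrow fix can be sketched in Python, where the unbounded integers play the role of the wider i64 (the function name is illustrative):

```python
U32_MAX = (1 << 32) - 1

# Add in a wider type, then range-check the narrowing back to u32.
def add_checked_u32(x, change):
    r = x + change                    # both values "promoted" to i64
    if not 0 <= r <= U32_MAX:
        raise OverflowError("result does not fit in u32")
    return r

print(add_checked_u32(10, -1))  # 9
try:
    add_checked_u32(0, -1)      # now the underflow is actually caught
except OverflowError:
    print("trapped")
```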

Now we've come full circle:

  1. Introduce trapping "by default" so it's easy to do the right thing.
  2. Realize that some cases need to circumvent the trapping to be correct.
  3. Find a "correct" solution that is very complicated, which people are then supposed to follow to do the right thing.


Enter the left hand side


Unfortunately we're not done listing the problems yet. What about this:

int a = INT_MAX;
long long b = a + a / 2;


Is that UB assuming long long is larger than int? In C, sure it is. What the person writing this probably wanted was the following:

int a = INT_MAX;
long long b = (long long)a + (long long)a / 2;


Mostly people don't run into this due to C's promotion rules. For example:

short a = 0x7FFF;
int b = a + a / 2;


- will do "the right thing" on all platforms where the int is at least 32 bits. This is because C implicitly converts all arguments to an int before performing any arithmetic operation.

But as demonstrated by the above examples, that only gets you so far. There is a recent language trend to perform arithmetic at the operands' native bit width, leading to new unexpected results (this example is from Zig):

var num1 : i8 = 16;
var num2 : i32 = 3;
var res : i32 = num2 + num1 * num1; // BOOM!


Here the multiplication is performed using signed 8 bits, which will overflow on 16 * 16, even though the result would later have been promoted to i32 for the addition.

These are hard-to-spot errors that blow up at runtime but look fine to the compiler.
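A Python sketch of the failure (mul_trap_i8 is an illustrative stand-in for trapping 8-bit multiplication):

```python
# Multiply at the operands' 8-bit width, trapping on overflow.
def mul_trap_i8(a, b):
    r = a * b
    if not -128 <= r <= 127:
        raise OverflowError("i8 multiply overflow")
    return r

try:
    res = 3 + mul_trap_i8(16, 16)  # 16 * 16 == 256 overflows i8 first
except OverflowError:
    print("trapped")
print(3 + 16 * 16)                 # 259: fine if done at i32 width
```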

It would be reasonable that the left hand side guides the type used on the right hand side in the assignment, but that adds complexity that few languages want to take on.

Summary



  1. Overflow traps cause unexpected and hard-to-spot non-commutative and non-associative behaviour in expressions that otherwise would have been fine.
  2. Wrapping behaviour enables buffer overflows and similar exploits.
  3. Overflow traps on unsigned integers makes the mixed signedness case very hard to get right.
  4. In most languages it doesn't matter if the left hand side can contain the number; what matters are the types on the right hand side, which isn't always intuitive or desired.


In a following post I'll investigate ways to try to improve on the status quo.
Christoffer Lernö
Refactoring is an important part of programming. If you are maintaining a non-trivial code base you need to constantly remove cruft and improve on solutions otherwise the code will slowly rot.

In working to improve abstractions and code quality, there is also a lure that is mostly ignored: over-engineering. It is the urge to add code that feels “magical” and does things in an extremely elegant way. You can find examples in amazing C++ templates, or in awesomely elegant Swift code that uses some combination of operator overloading, generics and pattern matching. It might look cool, but over-engineering is dangerous.

It’s dangerous because you can spend days on that “perfect abstraction” which might be elegant on the surface — but your teammates will have a less pleasant time trying to figure out how to debug or extend it later on.

It’s dangerous because all that time you spent might make you reluctant to find easier solutions, or throw it away when it’s no longer needed.

It’s dangerous because that complexity disguised as abstraction is making your code less maintainable and also less easy to understand.

It’s dangerous because you might have thrown away bug-free code and replaced it with something new and untested because you thought it might look more elegant.

But most of all it’s dangerous because it’s so damned satisfying to just find those beautiful abstractions. It’s so much fun that we forget how dangerous it is.

So when you feel the urge — remember restraint. The “magically cool things” your language can do are usually exactly those parts that you should stay clear of.

(Previously posted on dev.to)
Christoffer Lernö
Previously I've been writing about various ways of handling integer overflows. The short summary is that the best you get will be some kind of trade off. I've discussed Go, Swift, Zig and Rust and looked at consequences of their particular choices.

At the time of writing, C3 has a Zig-like system with somewhat stronger left hand inference. That said, it exhibits mostly the same behaviour as Zig currently does.

In this article series I'm going to investigate a few solutions I considered for C3, but first I need to explain what triggered the investigation in the first place.

A common occurrence is the following code:

uptrdiff a = getIndex();
int value = ptr[a - 1];


In C3, like in Go, Rust, Zig etc., the 1 in this expression is not concretely typed, so its type has to be derived from somewhere. Unlike Zig, which would derive the type from a, C3 prefers to push down the type: in this case the preferred type is iptrdiff (coming from the array index), which is then pushed down into the operands. Unfortunately, this results in the operands having different signedness. Obviously we would have preferred to find the type of 1 using the type of a. However, it might be that a = 0, in which case the unsigned subtraction would underflow, which with trapping unsigned underflow would be an error. In that case the correct thing would be to cast a to an iptrdiff if underflow is expected.

The situation becomes even more complex if the value is a variable:
uptrdiff a = getIndex();
ushort offset = getOffset();
int value = ptr[a - offset];


With implicit widening, this expression happily converts offset to uptrdiff (typically a 64 bit number), and then proceeds to completely break on a = 0, offset = 1. If ptr is a pointer, then a negative index might be completely reasonable (which is why indexing uses iptrdiff by default).
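A quick Python sketch of that failure mode, assuming a 64-bit uptrdiff with trapping unsigned underflow (sub_trap_u64 is an illustrative name):

```python
UPTR_MAX = (1 << 64) - 1

# Unsigned 64-bit subtraction that traps instead of wrapping.
def sub_trap_u64(a, b):
    r = a - b
    if not 0 <= r <= UPTR_MAX:
        raise OverflowError("uptrdiff underflow")
    return r

try:
    sub_trap_u64(0, 1)          # a = 0, offset = 1
except OverflowError:
    print("trapped")            # the index -1 is unreachable this way
print(0 - 1)                    # -1: what iptrdiff arithmetic would give
```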

This demonstrates that there is no truly optimal way through this. We can note that:
int value = ptr[cast(a - offset as iptrdiff)];


- does not fix anything; the trapping happens before the conversion. We need the awkward:
int value = ptr[cast(a as iptrdiff) - cast(offset as iptrdiff)];


An unrelated but also problematic behaviour is this:
char x = 16;
int y = x * x;


If we only use the operands to determine the type, then this will overflow, as 16 * 16 overflows the 8-bit char type. It was for this reason that C3 added bi-directional typing, pushing the type down into the operands, so that the expression implicitly becomes:
int y = cast(x as int) * cast(x as int);


Unfortunately this works poorly with casts:
ichar s = -128;
uint z = cast(s * s as uint);


What does the above mean? If we do the conventional sign extension of each operand and then bitcast to unsigned, the operands become very large uint values, and the multiplication overflows. If we instead try the conversion after performing the multiplication in 8 bits, the multiplication itself overflows.
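Both readings can be sketched with 32-bit arithmetic in Python (illustrative only):

```python
M32 = 1 << 32

# Reading 1: sign-extend each ichar operand to 32 bits and bitcast to
# uint first; the operands become huge and the multiply overflows u32.
u = -128 % M32
print(u)                # 4294967168
print(u * u >= M32)     # True: the unsigned multiply overflows

# Reading 2: multiply in 8 bits first; 16384 is far outside an ichar.
print(-128 * -128)      # 16384, does not fit in [-128, 127]
```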

There are more examples, but this should be enough to illustrate that trapping behaviour – and especially unsigned overflow – creates huge headaches when mixing types.

To summarize

  1. Bi-directional typing does not work well in a case like a pointer index, where one needs to pick unsigned or signed depending on the operands.
  2. Overflow trapping creates problems when using the left hand side to infer widening.
  3. Overflow trapping is likewise problematic when not using the left hand side and then doing implicit widening at assignment.


Initial attempts


My initial attempts tried to introduce clever ways to push down both iptrdiff and uptrdiff to pick the best alternative. But the biggest problem in doing so lies not in the technical challenge, but in creating rules that are intuitive to the programmer. Eventually this led me to investigate other solutions.

Seeing how C# promotes int + uint to long inspired an idea: have some default int promotion and then promote using the left hand side, up to a maximum integer type.

It's similar to the "Widen and narrow" strategy discussed in a previous article, but unlike that solution the maximum integer is typically the max register size. This meant that on most platforms common things like u64 = u64 + i32 would not work. And remember that overflow would trap, so the C trick of relying on 2s complement to emulate negative numbers for unsigned values simply would not work.

The second change was in how casts should behave: the idea was that inside of a cast we would basically have only wrapping semantics at that particular size, and it would not matter whether the wrap is done late or early. That is true for addition and subtraction, but not for division. For division, cast(a / (x + y) as u32) is not the same as cast(a as u32) / (cast(x as u32) + cast(y as u32)). With limited integer types, the latter is the best we can implement, but this may run counter to what we would expect from a cast or even an alternative wrap operator.
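The division case is easy to demonstrate with a Python sketch (values chosen for illustration):

```python
# Truncating the operands to u32 before dividing gives a different
# answer than dividing wide and truncating only the result.
M32 = 1 << 32
a, x, y = (1 << 32) + 4, 1, 1

late = (a // (x + y)) % M32                   # cast(a / (x + y) as u32)
early = (a % M32) // ((x % M32) + (y % M32))  # wrap the operands first
print(late)   # 2147483650
print(early)  # 2
```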

Although promising at the outset, these examples show that the model breaks down both when mixing integers and when trying to replicate wrapping behaviour in a predictable way.

The next blog post will continue discussing solutions considered and their various advantages and drawbacks.