[cap-talk] Memory allocation in embedded systems

David Hopwood david.nospam.hopwood at blueyonder.co.uk
Mon Jan 1 18:58:05 CST 2007

David Wagner wrote:
> Thanks for your comments on embedded systems.  They're helpful.
> Let me follow up on one point, for my own understanding:
> David Hopwood writes:
>>As it happens, the embedded system I'm working on at the moment (a printer)
>>uses linked structures within a statically allocated array.
> How does it help to allocate those structures from a statically allocated
> array, instead of from the heap?  Isn't that statically allocated array
> just a heap in disguise, making your code just malloc() in disguise?

A heap is a memory pool that is shared by all allocations within a process.
The statically allocated array is a memory pool, but it is only used for
objects of one type.

If C had a general-purpose pool allocator in its standard library, I might
have used that, but it doesn't. (The app is written mainly in the intersection
of C and C++, although the discussion below is language-independent. It uses
more than one memory pool, although only one contains linked structures.)

> Why would the resource exhaustion issues be significantly different for
> linked structures within a statically allocated array (which sounds
> to me like an open-coded malloc()), as compared to any other kind of
> dynamic allocation (e.g., an explicit call to malloc())?

Suppose that you have more than one type of object for which memory needs to
be allocated. What you want is for exhaustion of memory for each object type
to occur predictably *and independently*. If memory is taken from a common
pool/heap, then you cannot guarantee that there will always be a minimum number
of objects of any given type that can be allocated, which is what you usually
need in order to be able to say that the application meets its intended
specification.
There is an obvious objection to this argument, which is that given a fixed
total amount of memory, the *first* allocation failure of a system that takes
memory from multiple separate, statically sized pools, will occur before that
of a system that takes memory from a common pool. However, when the first
allocation failure occurs is not the point. In a well-written embedded or
safety-critical application (or in an OS kernel), you always have robust error
handling of any possible resource exhaustion situation. Failure of an allocation
attempt is not a problem as long as it occurs at a point where you can recover
from it.

What you don't want is for the system to be able to get into a state where
the number of objects of one type has exceeded the specification without being
reported, and then there is insufficient memory for another type of object even
though the number of instances of that type is within the spec. It's better to
fail the specific operation that caused a pool to overflow, than to fail some
possibly unrelated operation later.

As Jonathan points out, sometimes a memory pool is used as a cache from which
entries can be evicted whenever needed. In that case, exhausting the pool does
not affect correctness, only performance. (But such caches are rarer in embedded
systems than they are in operating systems, partly because caching optimizations
do not help with worst-case real-time performance.)

The obvious disadvantage of static allocation is that you have to predict and
limit in advance what the maximum number of objects will be for each pool
(or the amount of memory, but usually the objects in a given pool are all the
same size). Since the common perception of embedded systems is that they are
highly memory-constrained, this might seem like a serious problem, leading to
a large amount of memory being "wasted", and/or constraints on the number of
objects that would limit functionality.

Indeed it does lead to wasted memory, but:

 - In an embedded system there is only one high-level application running.
   The memory that is "wasted" would not be used by anything else.

 - The specification that you need to meet (and hence the number of objects
   required) is usually much more precisely defined than it would be for
   general-purpose software.

 - Memory is dirt-cheap. If you don't have enough memory to be able to write
   the application in a straightforward way, just get some more. (I am in the
   fortunate position of normally being able to dictate the hardware spec for
   the systems I design, as well as writing the software.)

As a result of the low cost of memory, most of the hairy memory-saving hacks
that used to be needed in embedded apps are now completely obsolete -- and IMHO
have been so for many years, although there are still plenty of embedded system
programmers who haven't adapted to this situation.

(A designer of one of the embedded controllers I use complains about memory
chip manufacturers discontinuing parts after only a few years, because they
are now too small for it to be economic to sell them. He has no choice but to
increase the memory available with each controller model; it's getting to the
point that I have more memory than I know what to do with even in the most
basic controllers.)

In the case of the printer app, it is running on essentially a desktop PC with
1 GB of RAM, which is mostly taken up by print data buffers -- so there is
essentially no memory pressure on anything else. This is very unusual for an
embedded system, but the conclusion above also holds for more typical systems
with much less memory.

There are some other reasons not to use malloc():

 - potential memory fragmentation, which is very difficult to reason about
   (impossible if you consider the allocator to be a black box).

 - if the memory usage of the app is constant, then the effect of that usage
   on the underlying platform is the same during testing as it will be for
   deployment, and there is no need to separately test this for the worst-case
   memory usage.

 - it is sometimes convenient to be able to iterate through all of the blocks
   allocated in a given pool; malloc doesn't allow this without maintaining
   an extra data structure that points to all the blocks.

 - the pointers returned by malloc() are nondeterministic. It can be a minor
   help to debugging to have a deterministic allocator.

 - potential bugs in the C standard library. You would think that libc
   implementations contain some of the most thoroughly-tested code in
   existence, but in my experience they still manage to have an unacceptably
   high incidence of bugs (and when you have the source, you can see just
   how badly written it is compared to your own code).

> I'm sure you can specialize the behavior of your dynamic allocator for
> this particular use, since you know it will only be used in this one way,
> but it's still a dynamic allocator, so you still need to prove that
> no dynamic allocation attempt will ever exceed the amount of storage
> available in that statically allocated array.

It's not always necessary to prove that an allocation will not fail, as long
as the behaviour if it does fail is reasonable and meets the specification.
In the printer app, there is only one point at which an allocation attempt
may fail, and that will cause a print job to be rejected, with an error
reported to the client that tried to submit it.

David Hopwood <david.nospam.hopwood at blueyonder.co.uk>
