After a very hectic time full of kernel patches and moving boxes [1], I sent the first version of the memblock simulator to linux-mm. Now is a good time to explain what this is all about and why such a thing is needed in the first place.

What?

We could say that “simulator” is a fancy word for a test suite that uses the actual memblock code. The program runs in user space (i.e., outside the kernel), which is a problem in itself: memblock relies on a bunch of kernel definitions that simply aren’t available there, so a naive compilation fails with more than a hundred errors and many more warnings. Still, we have to create the illusion that all the required structures and functions are present. This means one thing: work out all the dependencies, stub the required definitions, and make the compiler happy. And that’s what I did.
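
To give a flavor of that stubbing, here is a purely illustrative sketch. Memblock calls kernel helpers such as panic() and the pr_*() printing macros, which don’t exist in user space, so the harness has to provide minimal replacements; the real simulator headers differ in the details.

    #include <stdio.h>
    #include <stdlib.h>

    /* in user space, a panic can simply print the message and abort the test program */
    #define panic(fmt, ...) do {                                        \
            fprintf(stderr, "panic: " fmt "\n", ##__VA_ARGS__);         \
            abort();                                                    \
    } while (0)

    /* early printk helpers become plain printf */
    #define pr_warn(fmt, ...)  printf(fmt, ##__VA_ARGS__)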

After getting memblock running, I was able to move on to the test cases. If you take a look at the patches, you can see it’s a series of unit tests exercising different memblock functions: define a region (or several), try to add/reserve/remove/free it, and check whether the memblock data structures get updated to the expected values. It’s quite simple. At least for now.
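
Conceptually, one of these tests looks roughly like the sketch below. The fields (memblock.memory.cnt, regions[0].base, and so on) are the real memblock data structures; the plain assert() style is just for illustration, as the actual patches use their own check macros.

    #include <assert.h>

    static void memblock_add_simple_check(void)
    {
            struct memblock_region *rgn = &memblock.memory.regions[0];
            phys_addr_t base = 0x1000;
            phys_addr_t size = 0x2000;

            memblock_add(base, size);

            /* the region is recorded and the counters are updated */
            assert(rgn->base == base);
            assert(rgn->size == size);
            assert(memblock.memory.cnt == 1);
            assert(memblock.memory.total_size == size);
    }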

Why?

As I mentioned before, memblock is quite a strange beast. It performs memory management before the actual memory allocators are initialized, which happens very early in the boot process. This makes it difficult to test and debug. A couple of regressions have happened in the past [2][3], and maybe they could have been avoided if there had been an automated way of testing memblock-related changes. For now, my project makes sure that the basic memblock API behaves as expected.

The future

The next thing I plan to work on is adding test coverage for the memblock_alloc_* and memblock_phys_alloc_* functions. They are responsible for finding a suitable memory region that can be used for an allocation. Testing these will need some prep work, because we want to work with real, valid memory ranges. Why is that, you may ask.
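
For context, this is roughly how these allocators are used by early boot code in the kernel itself (a sketch, not simulator code):

    #include <linux/memblock.h>
    #include <linux/cache.h>

    static void __init early_alloc_example(void)
    {
            /* a virtual pointer to zeroed memory carved out of a free region */
            void *buf = memblock_alloc(4096, SMP_CACHE_BYTES);

            /* the phys_alloc variant returns a physical address instead */
            phys_addr_t pa = memblock_phys_alloc(4096, SMP_CACHE_BYTES);

            (void)buf;
            (void)pa;
    }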

In its basic form, memblock can store 128 entries each of available and reserved memory regions. There are cases when this is not enough, and resizing one of the arrays is required. We don’t need to look far for an example: on x86, UEFI can return a memory map with more regions than memblock can hold by default [4]. So, what would happen if we tested this use case as things stand now? We could do something like this:

  • Allow array resizing (call memblock_allow_resize())
  • Register some memory as available (call memblock_add(...))
  • Try to add/reserve INIT_MEMBLOCK_REGIONS + 1 regions
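
A rough sketch of those steps, assuming the simulator environment (INIT_MEMBLOCK_REGIONS is the real memblock constant, 128 by default; the base addresses are made up, which is exactly the problem described next):

    static void memblock_trigger_resize(void)
    {
            int i;

            memblock_allow_resize();

            /* register some "available" memory at an arbitrary base address */
            memblock_add(0x0, 0x40000000);

            /* non-adjacent reservations, so they don't get merged; the last
             * one overflows the array and forces memblock_double_array() */
            for (i = 0; i < INIT_MEMBLOCK_REGIONS + 1; i++)
                    memblock_reserve(i * 0x2000, 0x1000);
    }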

Adding the last region would trigger the array resize, which is done by memblock_double_array(). That function looks for a free spot based on what was added to memblock.memory and memblock.reserved. Now, the question is: what memory block did we register as available? What is the base address? 0x0? 0xaabbcc? Either way, we can be certain it’s not a valid address inside this program. Even if memblock_double_array() finds space for the resized array within that range, the memcpy() it calls to move the old entries will segfault.

So, for now, the solution is to pass memblock_add() valid memory ranges obtained from malloc(). It remains to be seen whether this method will also work for testing the allocation functions. If it does, you’ll see an appropriate patchset in a month or so.
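
A minimal sketch of that idea, assuming the simulator can treat a user-space buffer as the “physical” memory it registers (the helper name and buffer size are made up):

    #include <stdint.h>
    #include <stdlib.h>

    #define MEM_SIZE (512 * 1024)

    static void *memory_block;

    static void setup_valid_memory(void)
    {
            /* a real, writable range, so memblock_double_array()'s memcpy()
             * has somewhere safe to put the resized array */
            memory_block = malloc(MEM_SIZE);

            memblock_add((phys_addr_t)(uintptr_t)memory_block, MEM_SIZE);
    }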

The far future

In the bigger picture, the memblock simulator should be able not only to test its features one by one, but to use them all together. For example, we could pass (or generate) a physical memory layout for the simulator [5], perform the usual memblock tasks, and simulate releasing the memory to the buddy page allocator. Such a test would check whether the final memory map was correctly initialized.
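
The shape of such an end-to-end test might look something like this; everything here except memblock_reserve() and memblock_free_all() is invented for illustration:

    static void memblock_full_flow_check(void)
    {
            /* hypothetical helper: feed a physical memory layout to memblock */
            load_physical_memory_layout("layouts/x86-uefi.map");

            /* the usual early-boot work: carve out some reservations */
            memblock_reserve(0x100000, 0x4000);

            /* hand everything that is left over to the buddy page allocator */
            memblock_free_all();

            /* hypothetical helper: compare the result against expectations */
            check_final_memory_map();
    }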

Still, it’ll take a lot of time to get there. Unfortunately, the 5 weeks I have won’t be enough to implement all of this.