I ended up going down the rabbit hole re-implementing Array.Sort with AVX2 intrinsics, and there’s no reason I should store all the agony inside (to be honest: I had a lot of fun with this). I should probably attempt to have a serious discussion with CoreCLR / CoreFX people about starting a slow process that would end with integrating this code into the main C# repos, but for now, let’s get in the ring and show what AVX/AVX2 intrinsics can really do for a non-trivial problem, and even discuss potential improvements that future CoreCLR versions could bring to the table.

Since there’s a lot to go over here, I’ll split it up into a few parts:

  1. In part 1, we did a short refresher on QuickSort and how it compares to Array.Sort. If you don’t need a refresher, you can skip it and get right down to part 2 and onwards, although I really recommend skimming through, mostly because I’ve got really good visualizations there that should be in the back of everyone’s mind as we’ll be dealing with vectorization & optimization later.
  2. In part 2, we go over the basics of Vectorized HW Intrinsics, discuss vector types and a handful of vectorized instructions we’ll actually be using in part 3, but we still won’t be sorting anything.
  3. In this part, we go through the initial code for the vectorized sorting and we’ll finally start seeing some payoff. We’ll finish with some agony courtesy of CPU’s Branch Predictor, just so we don’t get too cocky.
  4. In part 4, we go over a handful of optimization approaches that I attempted trying to get the vectorized partition to run faster, we’ll see what worked and what didn’t.
  5. In part 5, we’ll see how we can get rid of almost 100% of the remaining scalar code by implementing small, constant-size array sorting. We’ll use, drum roll…, yet more AVX2 vectorization and gain a considerable amount of performance / efficiency in the process.
  6. Finally, in part 6, I’ll list the outstanding stuff / ideas I have for getting more juice and functionality out of my vectorized code.

Vectorized Partitioning + QuickSort

It’s time we mash all the new knowledge we picked up in the last post about SIMD registers and instructions and do something useful with them. Here’s the plan:

AVX2 Partitioning Block

Let’s start with this “simple” block:

var P = Vector256.Create(pivot); 
...
var current = Avx2.LoadDquVector256(nextPtr);
var mask = (uint) Avx.MoveMask(
    Avx2.CompareGreaterThan(current, P).AsSingle());
current = Avx2.PermuteVar8x32(current,
    Avx2.LoadDquVector256(PermTablePtr + mask * 8));
Avx.Store(writeLeft, current);
Avx.Store(writeRight, current);
var popCount = PopCnt.PopCount(mask);
writeRight -= popCount;
writeLeft  += 8 - popCount;

That’s a lot of cheese, so let’s break it down:

  1. We broadcast the pivot value into all 8 lanes of P (this happens once, outside the loop).
  2. Avx2.LoadDquVector256 reads the next 8 elements into current.
  3. CompareGreaterThan compares each lane against the pivot, and MoveMask compresses the 8 per-lane results into an 8-bit mask: bit n is set when element n is greater than the pivot (the AsSingle() is there because MoveMask only exists for floating-point lanes, so we reinterpret the comparison result; the bits don’t change).
  4. That mask indexes into a pre-generated permutation table, and PermuteVar8x32 shuffles current so that all the elements smaller-or-equal to the pivot are grouped at the beginning of the vector, and all the greater-than elements at its end.
  5. We store the entire permuted vector to both writeLeft and writeRight.
  6. Finally, PopCount(mask) tells us how many elements were greater than the pivot, so we advance writeLeft by 8 - popCount and pull writeRight back by popCount; whatever part of each 8-element store doesn’t “belong” to its side simply gets overwritten by the next store to that side.

And we’re done!

This was a full 8-element-wise partitioning block, and it’s worth noting that there isn’t a single data-dependent branch anywhere in it: every element ends up on the correct side of the array through pure data manipulation.
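
If it helps to see the same thing without the intrinsics, here is a scalar sketch of what one block iteration accomplishes. This is purely illustrative: the helper below is made up for this post and is not in the repo; it spells out with branches and loops exactly what the vectorized block achieves with neither.

// Hypothetical scalar equivalent of a single partition block, for illustration only
static unsafe void ScalarPartitionBlock(int* nextPtr, int pivot,
                                        ref int* writeLeft, ref int* writeRight)
{
    // Read all 8 source elements up-front; writeLeft/writeRight may overlap
    // the memory we've just read from, exactly like in the vectorized version
    int* below = stackalloc int[8];
    int* above = stackalloc int[8];
    int nb = 0, na = 0;
    for (var i = 0; i < 8; i++) {
        var v = nextPtr[i];
        if (v <= pivot) below[nb++] = v;
        else            above[na++] = v;
    }

    // The permuted vector is [below..., above...]; the two Avx.Store calls
    // write all 8 lanes to both sides, but the pointer updates make sure
    // only the relevant part of each store survives
    for (var i = 0; i < nb; i++) writeLeft[i] = below[i];
    for (var i = 0; i < na; i++) writeRight[8 - na + i] = above[i];

    writeLeft  += nb;   // += 8 - popCount
    writeRight -= na;   // -= popCount
}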

I thought it would be nice to show off that the JIT is well behaved in this case with the generated x64 asm:

vmovd xmm1,r15d                      ; Broadcast
vbroadcastd ymm1,xmm1                ; pivot
...
vlddqu ymm0, ymmword ptr [rax]       ; load 8 elements
vpcmpgtd ymm2, ymm0, ymm1            ; compare
vmovmskps ecx, ymm2                  ; movemask into scalar reg
mov r9d, ecx                         ; copy to r9
shl r9d, 0x3                         ; *= 8
vlddqu ymm2, qword ptr [rdx+r9d*4]   ; load permutation
vpermd ymm0, ymm2, ymm0              ; permute
vmovdqu ymmword ptr [r12], ymm0      ; store left
vmovdqu ymmword ptr [r8], ymm0       ; store right
popcnt ecx, ecx                      ; popcnt
shl ecx, 0x2                         ; pointer
mov r9d, ecx                         ; arithmetic
neg r9d                              ; for += 8 - popCount
add r9d, 0x20                        ;
add r12, r9                          ; Update writeLeft pos
sub r8, rcx                          ; Update writeRight pos

Anyone who’s followed the C# code can use the intrinsics table from the previous post and really read the assembly code without further help. If that’s not a sign that the JIT is literally taking our intrinsics straight to the CPU as-is, I don’t know what is!

Permutation lookup table

The permutation lookup table is the elephant in the room at this stage, so let’s see what’s in it:

Here are 4 sample values from the generated permutation table that I’ve copy-pasted so we can get a feel for it:

static ReadOnlySpan<int> PermTable => new[] {
    0, 1, 2, 3, 4, 5, 6, 7,     // 0   => 0b00000000
    // ...
    3, 4, 5, 6, 7, 0, 1, 2,     // 7   => 0b00000111
    // ...
    0, 2, 4, 6, 1, 3, 5, 7,     // 170 => 0b10101010
    // ...
    0, 1, 2, 3, 4, 5, 6, 7,     // 255 => 0b11111111
};
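
For the curious, here’s roughly how such a table could be generated. This is a hypothetical helper shown purely to explain what each entry encodes; the repo ships the table pre-generated rather than computing anything at runtime:

// For every possible 8-bit mask, produce a permutation that moves the lanes
// whose bit is 0 (<= pivot) to the front, and the lanes whose bit is 1
// (> pivot) to the back, preserving relative order within each group
static int[] GeneratePermTable()
{
    var table = new int[256 * 8];
    for (var mask = 0; mask < 256; mask++) {
        var next = 0;
        for (var lane = 0; lane < 8; lane++)
            if ((mask & (1 << lane)) == 0)
                table[mask * 8 + next++] = lane;
        for (var lane = 0; lane < 8; lane++)
            if ((mask & (1 << lane)) != 0)
                table[mask * 8 + next++] = lane;
    }
    return table;
}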

If you look at the actual code, you’ll see that the values inside the permutation table are coded as a ReadOnlySpan<byte>. This is a CoreCLR / C# 7.3 specific optimization that allows us to treat the address of this table as a constant at JIT time. Kevin Jones (@vcsjones) did a wonderful job of digging into it, go read his excellent post about this.

It’s important to note that this must be a ReadOnlySpan<byte> for the optimization to trigger (that realization cost me two nights of my life chasing what I was sure had to be a GC / JIT bug). Now, normally, storing what is really a ReadOnlySpan<int> as a ReadOnlySpan<byte> would be a bad idea, since it forces us to “choose” between little/big-endian encoding at compile time, while in C# we have to assume our code might run on either, and the CPU we end up on might not use the encoding we compiled with. Not fun! Luckily, in this case it’s a non-issue: the entire premise is Intel specific, so we can simply assume little endianness here till the end of all times.
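
To tie this back to the code: IntPermTablePtr and GetIntPermutation, which show up in the setup code below, are nothing fancy. Here’s a minimal sketch of how the fixed address of such a ReadOnlySpan<byte> can be turned into an int* once and then used to load one permutation vector per mask; the names and exact shape differ a bit from the repo, so treat this as an approximation:

using System;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static unsafe class Int32PermTables
{
    // 256 entries x 8 ints, encoded as little-endian bytes; shortened here
    static ReadOnlySpan<byte> PermTableBytes => new byte[] {
        0, 0, 0, 0,  1, 0, 0, 0,  2, 0, 0, 0,  3, 0, 0, 0,
        4, 0, 0, 0,  5, 0, 0, 0,  6, 0, 0, 0,  7, 0, 0, 0,
        // ... 255 more rows
    };

    // The span is backed by the assembly's static data, so its address is
    // stable and the GC never moves it: taking a raw pointer once is fine
    internal static readonly int* IntPermTablePtr =
        (int*) Unsafe.AsPointer(ref MemoryMarshal.GetReference(PermTableBytes));

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    internal static Vector256<int> GetIntPermutation(int* pBase, uint mask) =>
        Avx2.LoadDquVector256(pBase + mask * 8);
}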

We’ve covered the basic layout of the permutation table. We’ll go back to it once we start optimization efforts in full, but let us finish the vectorized partition first.

Outer-Loop & Function

We now have a short partitioning block at hand, but there’s a major complication: in-place sorting. In-place sorting brings a new challenge to the table: while our block partitions 8 elements cleanly and quickly, the partitioned data inside the SIMD register contains both values smaller and larger than the pivot, and each “portion” of that register needs to end up on a different end of the array… that’s kind of the whole idea with partitioning.

As shown previously, when we toured the vectorized partitioning block, it ends with writing the partitioned data to both sides of the array (remember writeLeft & writeRight). I fondly named this “double-pumped” partitioning in my code (and named the whole thing AVX2DoublePumped<T>.QuickSort), as it pumps values into both ends of the array. This is a tricky approach, since we read some data but can’t write it back to the same address… which admittedly does not sound very in-place-y… but we’ll soon dissect it to bits and see how/why this works.

This is the initial approach I got working that I was not ashamed of. I’ve left it pretty much intact in the repo under the name AVX2DoublePumpedNaive, in case you want to dig through the full code. This approach, like all good things in life, comes in 3 parts: setup, a double-pumped loop, and handling the remainder (when what’s left is too small for SIMD registers), so let’s start:

Setup: Make some room!

What I eventually opted for was to read from one area and write to another area in the same array. But we need to make some spare room inside the array for this. How?

We cheat! ( ¯\_(ツ)_/¯ ), but not really: we allocate some temporary space, using stackalloc in C#. Here’s why this isn’t really cheating: the temporary buffer is tiny, its size is a constant that does not depend on the length of the array being sorted, it lives on the stack so the GC is never involved, and it disappears the moment the function returns.

Now that we have some temporary memory, we simply read ahead one 8-element vector from each side and use our good-ole’ partitioning block to partition straight into the temporary memory. Having done that, we no longer care about the contents of the areas we just read from in the original array, as they’re safely partitioned away in the temporary memory, and we’re free to write up to one SIMD vector to each end of the array. In other words, we’ve made enough room inside the array for writing in-place while partitioning. We finish the setup by initializing read and write pointers for each side (readLeft, readRight, writeLeft, writeRight); an alternative way to think about them is that each side gets its own head (read) and tail (write) pointer. We will end up reading from one of the heads and writing to both tails later on.

Here’s the signature + setup code:

static unsafe int* VectorizedPartitionInPlace(int* left, int* right, int *tmp)
{
    var pivot = *right;
    var readLeft = left;
    var readRight = right - 1;
    var writeLeft = readLeft;
    var writeRight = readRight - 8;

    var tmpLeft = tmp;
    var tmpRight = tmpLeft + TMP_SIZE_IN_ELEMENTS - 8;

    // Broadcast the selected pivot
    var P = Vector256.Create(pivot);
    var pBase = IntPermTablePtr;

    // Read ahead from left+right
    var LT0 = LoadDquVector256(readLeft + 0*8);
    var RT0 = LoadDquVector256(readRight - 1*8);

    var rtMask = (uint) MoveMask(CompareGreaterThan(LT0, P).AsSingle());
    var ltMask = (uint) MoveMask(CompareGreaterThan(RT0, P).AsSingle());

    var ltPopCount = PopCount(rtMask);
    var rtPopCount = PopCount(ltMask);

    LT0 = PermuteVar8x32(LT0, GetIntPermutation(pBase, rtMask));
    RT0 = PermuteVar8x32(RT0, GetIntPermutation(pBase, ltMask));

    Avx.Store(tmpRight, LT0);
    tmpRight -= ltPopCount;
    ltPopCount = 8 - ltPopCount;
    Avx.Store(tmpRight, RT0);
    tmpRight -= rtPopCount;
    rtPopCount = 8 - rtPopCount;
    tmpRight += 8;

    Avx.Store(tmpLeft, LT0);
    tmpLeft += ltPopCount;
    Avx.Store(tmpLeft, RT0);
    tmpLeft += rtPopCount;

    // Adjust for the reading that was made above
    readLeft  += 1*8;
    readRight -= 2*8;    
    // ... Rest of the code follows in the next paragraph

I’ve cut out the comments, but it’s all available with much more detail and context in the repo. There isn’t a lot going on here for now: the function accepts (left, right, tmp) as parameters, we already expect the pivot to have been selected and right to point at it (we’ll cover that in a bit), and all you’re seeing here is two partition blocks going on at the same time: one partitioning a single vector from the left side, and one partitioning a single vector from the right.

The setup fragment ends with readLeft being advanced by the size of one Vector256<int> and readRight being decremented by the size of two. This might seem peculiar at first, but don’t forget that when we read/write using Avx2.LoadDquVector256/Avx.Store we always have to supply the start address of the 8 elements we’re reading or writing! There is no ability to read/write to the “left” of the pointer, so this isn’t anything more than accounting for that.

Double Pumped Loop

Here’s a visual aid for how I ended up doing this; note the color codes and legend I’ve provided, and try to watch a few loops of the animation while paying attention to the various color transitions. This will become useful for parsing the text below:

Here’s the same loop in C#:

    while (readRight >= readLeft) {
        int* nextPtr;
        if ((readLeft   - writeLeft) <=
            (writeRight - readRight)) {
            nextPtr = readLeft;
            readLeft += 8;
        } else {
            nextPtr = readRight;
            readRight -= 8;
        }

        var current = LoadDquVector256(nextPtr);
        var mask = (uint) MoveMask(CompareGreaterThan(current, P).AsSingle());
        current = PermuteVar8x32(current, GetIntPermutation(pBase, mask));

        Store(writeLeft, current);
        Store(writeRight, current);

        var popCount = PopCount(mask);
        writeRight -= popCount;
        writeLeft += 8U - popCount;
    }

Most of the loop body is the partition block we’ve already been through before. The only novelty here is the rather complex condition with which we select what side to read from next:

        if ((readLeft   - writeLeft) <=
            (writeRight - readRight)) {
            nextPtr = readLeft;
            readLeft += 8;
        } else {
            nextPtr = readRight;
            readRight -= 8;
        }

This condition does in code what we described with the animation/words before: it compares the distance between the head and tail on each side to figure out which side has less spare room left, i.e. which side is closer to being overwritten. Since the next read then happens from that side, we’ve just added 8 more integers’ worth of writing space to it, thereby eliminating the risk of overwriting unread data. For example, if readLeft - writeLeft is 8 while writeRight - readRight is 24, the left side has only 8 elements of runway left, so we read from the left: that both consumes the data that was closest to being clobbered and pushes the left side’s runway back up.
While it might be easy to reason about in terms of correctness or motivation, this is a very sad line of code, as it will haunt us in the next post!

Handling the remainder and finishing up

Finally, we come out of the loop once we have fewer than 8 elements left to partition. We obviously can’t use vectorized code here, so we just do plain old scalar partitioning:

Once we’re done with this little trailing scalar partition, we simply copy our already-partitioned data from the temporary area back into the array, into the space left between writeLeft and writeRight. It’s a quick copy of at most 96 bytes, and with that we’re finally done with partitioning; we just need to move the pivot into the newly calculated boundary position and report that position back as the return value, for this to officially be christened as an AVX2 partitioning function!

Here’s the final piece of this function:

    var boundary = writeLeft;

    // We're scalar from now, so
    // correct the right read pointer back
    readRight += 8;

    while (readLeft < readRight) {
        var v = *readLeft++;

        if (v <= pivot) {
            *tmpLeft++ = v;
        } else {
            *--tmpRight = v;
        }
    }

    var leftTmpSize = (int) (tmpLeft - tmp);
    new ReadOnlySpan<int>(tmp, leftTmpSize).
        CopyTo(new Span<int>(boundary, leftTmpSize));
    boundary += leftTmpSize;
    var rightTmpSize = (int) (tmp + TMP_SIZE_IN_ELEMENTS - tmpRight);
    new ReadOnlySpan<int>(tmpRight, rightTmpSize).
        CopyTo(new Span<int>(boundary, rightTmpSize));

    // Shove the pivot back to the boundary
    Swap(boundary++, right);
    return boundary;
}

From the top

Now that we have a vectorized partitioning function, we’re just missing the top-level code that does the temporary stack allocation, pivot selection and recursion. We covered the scalar variant of this in the first post, but let’s look at our real/final function; for obvious reasons, it’s pretty much copy-pasted, with minor adaptations, from the CoreCLR code that does the same:

public static partial class AVX2DoublePumpedNaive<T> 
  where T : unmanaged, IComparable<T>
{
    const int SLACK_PER_SIDE_IN_VECTORS  = 1;
    const int SLACK_PER_SIDE_IN_ELEMENTS = SLACK_PER_SIDE_IN_VECTORS * 8;
    const int TMP_SIZE_IN_ELEMENTS       = 2 * SLACK_PER_SIDE_IN_ELEMENTS + 8;
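    // = 2 * 8 + 8 = 24 ints = 96 bytes of stack, which is where the
    // (at most) 96-byte copy-back at the end of the partition comes from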

    public static unsafe void QuickSort(T[] array)
    {
        fixed (T* p = &array[0]) {
            if (typeof(T) == typeof(int)) {
                // We use this much space for making some room inside
                // the array + partitioning the remainder
                var tmp = stackalloc int[TMP_SIZE_IN_ELEMENTS];

                var pInt = (int*) p;
                QuickSortInt(pInt, pInt, pInt + array.Length - 1, tmp);
            }
        }
    }
    //...
}

This is the main entry point, where we special-case using generic type elision and simply call our signed-integer version, QuickSortInt, after allocating the temporary memory. This is as good a time as any to mention that, for the time being, I’ve only implemented vectorized quick-sorting for when T is int. To fully replace Array.Sort(), many more tweaked versions of this code will have to be written, eventually supporting unsigned integers, integers both larger and smaller than 32 bits, and floating-point types.
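
For reference, here’s what calling into this looks like from the outside; a hypothetical usage snippet, nothing more:

// Sort 1M random integers with the naive double-pumped AVX2 sorter
var data = new int[1_000_000];
var rng = new Random(666); // seed is arbitrary
for (var i = 0; i < data.Length; i++)
    data[i] = rng.Next();

AVX2DoublePumpedNaive<int>.QuickSort(data);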

Back to the existing code, though: once we know for sure that T is an int, we go into QuickSortInt:

static unsafe void QuickSortInt(int* start, int* left, int* right, int *tmp)
{
    var length = (int) (right - left + 1);

    int* mid;
    switch (length) {
        case 0:
        case 1:
            return;
        case 2:
            SwapIfGreater(left, right);
            return;
        case 3:
            mid = right - 1;
            SwapIfGreater(left, mid);
            SwapIfGreater(left, right);
            SwapIfGreater(mid, right);
            return;
    }

    // Go to insertion sort below this threshold
    if (length <= 16) {
        InsertionSort(left, right);
        return;
    }

    // Compute median-of-three, of:
    // the first, mid and one before last elements
    mid = left + ((right - left) / 2);
    SwapIfGreater(left, mid);
    SwapIfGreater(left, right - 1);
    SwapIfGreater(mid, right - 1);
    // Pivot is mid, place it in the right hand side
    Swap(mid, right);

    var sep = VectorizedPartitionInPlace(left, right, tmp);

    QuickSortInt(start,  left, sep - 1, tmp);
    QuickSortInt(start, sep, right, tmp);
}

This is the part I blatantly copied from ArraySortHelper<T>. What it does is:

  1. Bail out immediately for lengths of 0-1, and handle 2-3 element partitions with a couple of compare-and-swaps.
  2. Fall back to scalar InsertionSort for anything at or below the 16-element threshold.
  3. Pick a median-of-three pivot out of the first, middle and one-before-last elements, and move it to the last position (right).
  4. Call our shiny new VectorizedPartitionInPlace and recurse on both resulting halves.
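
Since Swap, SwapIfGreater and InsertionSort keep popping up without being shown, here are minimal, int-flavored sketches of such scalar helpers; the repo’s versions are generic over T, so treat these as stand-ins:

static unsafe void Swap(int* a, int* b)
{
    var t = *a;
    *a = *b;
    *b = t;
}

static unsafe void SwapIfGreater(int* a, int* b)
{
    if (*a <= *b) return;
    Swap(a, b);
}

static unsafe void InsertionSort(int* left, int* right)
{
    // left/right are inclusive, matching how QuickSortInt calls it
    for (var i = left + 1; i <= right; i++) {
        var v = *i;
        var j = i - 1;
        while (j >= left && *j > v) {
            *(j + 1) = *j;
            j--;
        }
        *(j + 1) = v;
    }
}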

Initial Performance

Are we fast yet?

Yes! This is by no means the end; on the contrary, it’s only the beginning. But we now have something working, and it’s not too shabby either, if I may say so:

BenchmarkDotNet=v0.11.5, OS=clear-linux-os 30850
Intel Core i7-7700HQ CPU 2.80GHz (Kaby Lake), 1 CPU, 4 logical and 4 physical cores
.NET Core SDK=3.0.100-rc1-014015
  [Host]     : .NET Core 3.0.0-rc1-19425-03 (CoreCLR 4.700.19.42204, CoreFX 4.700.19.42010), 64bit RyuJIT
  Job-PDGVYD : .NET Core 3.0.0-rc1-19425-03 (CoreCLR 4.700.19.42204, CoreFX 4.700.19.42010), 64bit RyuJIT

InvocationCount=10  IterationCount=3  LaunchCount=1
UnrollFactor=1  WarmupCount=3

We’re off to a very good start: we can see that as soon as we hit 1,000-element arrays (even earlier, truth be told) we already outperform Array.Sort (87% of its runtime), and by the time we get to 1M / 10M element arrays we’re looking at speedups north of 2.5x (39%, 37% runtime) over the scalar C++ code!
You can clearly see that while Array.Sort behaves like we would expect from a QuickSort function, slowing down at the rate its O(n log n) complexity predicts, our AVX2DoublePumpedNaive is slightly peculiar: the time spent sorting every single element (Time / N) initially goes up as the array gets bigger, then goes down a bit and back up again. Huh? It actually improves as we sort more data? Quite unreasonable, unless…
Remember that we have scalar insertion sort and vectorized code mixed together. Where are we actually spending more CPU cycles though?
It’s time we profile the code to see what’s really going on. We can fire up the venerable Linux perf tool through a simple test binary/project I’ve coded up, which lets me execute some dummy sorting by selecting the sort method I want to invoke and specifying some parameters for it on the command line, for example:

$ cd ~/projects/public/QuickSortAvaganza/Example
$ dotnet publish -c release -o linux-x64 -r linux-x64
# Run AVX2DoublePumped with 1,000,000 elements x 100 times
$ ./linux-x64/Example --type AVX2DoublePumpedNaive --sizes 1000000

This will call the AVX2DoublePumpedNaive implementation we’ve been discussing since the beginning of this post with 1M elements, and re-sort the same random data 100 times to generate some heat, in case global warming isn’t cutting it for you.
I know that calling dotnet publish ... seems superfluous, but just trust me1 and go along with it here:

# Make the JIT speak to 'perf'
$ export COMPlus_PerfMapEnabled=1 
# Record some performance information, namely the 'instructions' HW counter,
# For 1M elements, 100 invocations
$ perf record -F max -e instructions ./Example \
       --type AVX2DoublePumpedNaive --sizes 1000000
info: Using a maximum frequency rate of 100,000 Hz
[ perf record: Woken up 45 times to write data ]
[ perf record: Captured and wrote 11.098 MB perf.data (290031 samples) ]
$ perf report --stdio -F overhead,sym | head -15
...
# Overhead  Symbol
    65.66%  [.] ... ::VectorizedPartitionInPlace(int32*,int32*,int32*)[Optimized]
    22.43%  [.] ... ::InsertionSort(!!0*,!!0*)[Optimized]
     5.43%  [.] ... ::QuickSortInt(int32*,int32*,int32*,int32*)[OptimizedTier1]
     4.00%  [.] ... ::Memmove(uint8&,uint8&,uint64)[OptimizedTier1]

# Same shtick, 10K elements, 10K invocations
$ perf record -F max -e instructions ./Example \
       --type AVX2DoublePumpedNaive --sizes 10000
info: Using a maximum frequency rate of 100,000 Hz
[ perf record: Woken up 38 times to write data ]
[ perf record: Captured and wrote 9.549 MB perf.data (250052 samples) ]
$ perf report --stdio -F overhead,sym | head -15
...
# Overhead  Symbol
    54.59%  [.] ... ::VectorizedPartitionInPlace(int32*,int32*,int32*)[Optimized]
    29.87%  [.] ... ::InsertionSort(!!0*,!!0*)[Optimized]
     7.02%  [.] ... ::QuickSortInt(int32*,int32*,int32*,int32*)[OptimizedTier1]
     5.23%  [.] ... ::Memmove(uint8&,uint8&,uint64)[OptimizedTier1]

This is a trimmed summary of the recorded performance metrics, specifically the number of instructions executed when running a 1M element sort 100 times, followed by a 10K element sort run 10K times. I hope that while you’re as shocked as I was when I saw this for the first time, you’re also starting to understand the previous oddities we saw with the Time / N column!
We’re spending upwards of 20% of our time doing scalar insertion sorting! This was supposed to be vectorized sorting, and yet, somehow, “only” 65% of the time is spent in the vectorized function (which also has some scalar parts, to be frank). Not only that, but as the size of the array decreases, the percentage of time spent in scalar code increases (from 22.43% to 29.87%), which should not surprise us anymore.
Before anything else, let me clearly state that this is not necessarily a bad thing! As the size of the partition decreases, the benefit of doing vectorized partitioning decreases in general, and even more so for our AVX2 partitioning which has non-trivial start-up overhead. We shouldn’t care about the amount of time we’re spending on scalar code, but the amount of time taken to sort the array.
The decision to drop into scalar insertion sort or stick with our vectorized code is controlled by the threshold I mentioned before, which is sitting there at 16 elements. We’re only beginning our optimization phase in the next post, so for now we’ll stick with the threshold the CoreCLR developers selected for Array.Sort, but this is definitely something we will tweak later for our mix of functions/implementations.

If you recall, in the first post of this series we collected some statistics about what goes on inside our sort routine. This is a perfect time to bring those statistics back and take them up a notch by adding vectorized-operation counters:

A few notes on these statistics, when comparing both sets as presented in these tabs:

This is but the beginning of our profiling journey, but we are already learning a complicated truth: right now, as fast as this already is, the scalar code we use for insertion sort will always put an upper limit on how fast we can possibly go by optimizing only the vectorized code we’ve gone over so far, unless we get rid of InsertionSort altogether and replace it with something better. But first things first, we must remain focused: 65% of the instructions executed are still spent doing vectorized partitioning; that is the biggest target in our sights!

Finishing off with a sour taste

There’s one last thing I want to end this post with, and it’s not an easy pill to swallow: let’s re-run perf and see how the code is behaving in terms of branch mis-prediction. What I’ll do is use a very useful Linux utility called cset, which can evacuate all user threads and (almost all) kernel threads from a given CPU core, using cpusets:

$ sudo cset shield --cpu 3 -k on
cset: --> activating shielding:
cset: moving 638 tasks from root into system cpuset...
[==================================================]%
cset: kthread shield activated, moving 56 tasks into system cpuset...
[==================================================]%
cset: **> 38 tasks are not movable, impossible to move
cset: "system" cpuset of CPUSPEC(0-2) with 667 tasks running
cset: "user" cpuset of CPUSPEC(3) with 0 tasks running

Once we have a shielded CPU core, we execute our Example binary on that CPU core while collecting various top-level hardware statistics using a different perf command line:

$ perf stat -a --topdown sudo cset shield -e ./Example AVX2DoublePumped 1000000 100
cset: --> last message, executed args into cpuset "/user", new pid is: 16107

 Performance counter stats for 'system wide':
        retiring      bad speculation       frontend bound        backend bound
...
...
...
S0-C3 1    37.6%                32.3%                16.9%                13.2%

       3.221968791 seconds time elapsed

I’m purposely showing only the statistics collected for our shielded core, since whatever was collected there is the only stuff we care about in the first place.

Well, here’s some bad news: core #3 is really not having a good time running our code, with bad speculation at 32.3%. This might not be immediately apparent if you haven’t done this sort of thing before (in which case, read the info box below), but this is really bad. The penalty for each bad speculation is an entire flush of the pipeline, which costs us around 14-15 cycles on modern Intel CPUs.


We have to remember that efficient execution on modern CPUs means keeping the CPU pipeline as busy as possible. This is quite a challenge given that the pipeline is about 15 stages long and the CPU itself is super-scalar (i.e. it can execute up to 3-4 instructions each cycle!). Even if every instruction had a constant latency, the CPU would have to start decoding 30+ instructions into “the future” while it’s just finishing up the current one, just to avoid doing nothing. That’s enough of a challenge for regular code, but what should it do when it sees a branch? It could attempt to execute both sides, which quickly becomes a fool’s errand if there are even more branches close by. What CPU designers opted for instead is speculative execution: add complex machinery that predicts whether a branch will be taken. But when the predictor mis-fires, we end up paying a huge penalty for unwinding all of the instructions that began executing on the mis-predicted path.
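
If you’ve never seen this effect in isolation, here’s the classic way to provoke it; a hypothetical micro-benchmark, not from the repo: summing all elements above some threshold is dramatically faster when the array is sorted, purely because the branch becomes predictable.

// Classic branch-misprediction demo (not from the repo): the loop body is
// identical, only the predictability of the branch changes
static long SumAbove(int[] data, int threshold)
{
    long sum = 0;
    foreach (var v in data)
        if (v > threshold)   // predictable on sorted data, a coin-flip on random data
            sum += v;
    return sum;
}

// Timing SumAbove over the same values shuffled vs. sorted shows the
// predictor's impact: same work, very different wall-clock time.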

Wait, I hear your optimistic thoughts in my head too… maybe it’s not our vectorized, so-called branch-less code? Maybe we can chalk it all up to that mean scalar InsertionSort function doing those millions and millions of scalar comparisons? We are, after all, using it to sort small partitions, and we’ve already measured it at more than 20% of the instructions executed. Let’s look at this with perf once more, this time focusing on the branch-misses HW counter, and try to figure out how the mis-predictions are spread across our call-stacks:

$ export COMPlus_PerfMapEnabled=1 # Make perf speak to the JIT
# Record some performance information:
$ perf record -F max -e branch-misses ./Example AVX2DoublePumped 1000000 100
info: Using a maximum frequency rate of 100,000 Hz
[ perf record: Woken up 45 times to write data ]
[ perf record: Captured and wrote 11.098 MB perf.data (290031 samples) ]
$ perf report --stdio -F overhead,sym | head -17
...
    40.97%  [.] ...::InsertionSort(!!0*,!!0*)[Optimized]
    32.30%  [.] ...::VectorizedPartitionInPlace(int32*,int32*,int32*)[Optimized]
     9.64%  [.] ...::Memmove(uint8&,uint8&,uint64)[OptimizedTier1]
     9.64%  [.] ...::QuickSortInt(int32*,int32*,int32*,int32*)[OptimizedTier1]
     5.62%  [.] ...::VectorizedPartitionOnStack(int32*,int32*,int32*)[Optimized]
...

No such luck. While InsertionSort is definitely starring here with 41% of the bad speculation events, we still have 32% of the bad speculation coming from our own new vectorized code. This means that our vectorized code still contains a lot of data-dependent branches. The resulting pipeline flush is a large penalty to pay given that our entire 8-element partition block has a throughput of around 6-7 cycles. That means we are hitting that 15 cycle wall too often to feel good about ourselves.

I’ll finish with that. We have a lot of work cut out for us. This is nowhere near over.
In the next post, I’ll try to give the current vectorized code a good shakeup. After all, it’s still our biggest target in terms of instructions executed, and second when it comes to branch mis-predictions. Once we finish squeezing that lemon for all its performance juice in the 4th post, we’ll turn our focus to the InsertionSort function in the 5th post, and we’ll see if we can appease the performance gods to make it faster.
In the meantime, if you’re up for a small challenge, go back to the vectorized partitioning function and try to figure out what is causing all those awful branch mis-predictions. We’ll be dealing with it at the end of the next post.


  1. For some reason, perf wasn’t in the mood to show me function names without calling dotnet publish and using the resulting binary, and I didn’t care enough to investigate further…