This is part of a series of posts, starting with Advent of Code 2019 in 130 ms: introduction.

## Day 20: Donut Maze

Day 20 is a maze problem similar to day 18. My solution runs in 11 ms and uses the same performance tricks. The only real difference is in the rules for generating new states, and that this solution doesn't use custom deduplication keys, since the keys for the hash set of visited locations are already quite small.

## Day 21: Springdroid Adventure

Day 21 is a pure logic puzzle, and all you can really do for performance optimization is optimize the Intcode engine and minimize the springscript program. My solution runs in 8.8 ms.

For part 1 I use the program `(!A || !B || !C) && D` - think of it as “I must jump soon, and I can land if I jump now”. Using Boolean algebra, we can express this in fewer operations as `!(A && B && C) && D`, which works out to 5 springscript instructions.
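
For reference, one possible 5-instruction encoding of `!(A && B && C) && D` (any equivalent ordering works; this is one way to write it, not necessarily my exact program):

```
OR A T
AND B T
AND C T
NOT T J
AND D J
```

The first three instructions accumulate `A && B && C` in `T`, `NOT T J` negates that into `J`, and the final `AND D J` adds the landing condition.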

For part 2 I use the program `(!A || !B || !C) && D && (E || H)`, with the rationale

- I must jump soon: `!A || !B || !C`
- I can land if I jump now: `D`
- After landing, I can take one step forward or immediately jump again: `E || H`

Again using a little Boolean algebra, we can express this as `!(A && B && C) && D && (E || H)`, which works out to 8 springscript instructions.
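
Similarly, one possible 8-instruction encoding of `!(A && B && C) && D && (E || H)` (again one valid ordering among several):

```
OR E T
OR H T
AND D T
OR A J
AND B J
AND C J
NOT J J
AND T J
```

Here `T` accumulates `D && (E || H)`, `J` accumulates `A && B && C` and is then negated in place, and the final `AND T J` combines the two conditions.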

## Day 22: Slam Shuffle

Day 22 is a feast of modular arithmetic, and is fun because it has some of the season's largest input numbers but shortest runtimes. My solution expresses the shuffle as a polynomial function, and runs in 450 μs. I believe most of these tricks, in some form, are necessary to solve part 2 at all.

Both parts are variants of the same challenge, but with different inputs: we're deterministically shuffling a deck of cards, and need to identify which card ends up in a given position after the shuffle. We have three shuffle operations: `stack()`, `cut(n)` and `deal(n)`. `stack()` reverses the order; `cut(n)` moves the first `n` cards to the back, or the last `|n|` to the front if `n` is negative; and `deal(n)` moves each card from index `i` to index `i * n mod N`. Figure 13 shows an example.

I’ve also added `id()` as the “do nothing” shuffle, which leaves all cards where they are. This will become useful later.
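
As a concrete sanity check, the three operations can be sketched in Python on a 10-card deck (illustrative helpers, not my actual solution code):

```python
def stack(deck):
    # "deal into new stack": reverse the order
    return deck[::-1]

def cut(deck, n):
    # move the first n cards to the back; Python's negative slicing
    # handles the "last |n| to the front" case for free
    return deck[n:] + deck[:n]

def deal(deck, n):
    # "deal with increment n": the card at index i lands at index i * n mod N
    N = len(deck)
    out = [None] * N
    for i, card in enumerate(deck):
        out[i * n % N] = card
    return out

print(deal(list(range(10)), 3))  # [0, 7, 4, 1, 8, 5, 2, 9, 6, 3]
```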

Instead of thinking of where each card goes after the shuffle, it will be more useful to think of where each card came from before the shuffle. This way, we'll represent any shuffle as a function `f(x)` that tells us, for each index `x` in the new sequence, which index `f(x)` of the original sequence the card at `x` came from. Represented this way, the elementary operations have the functions:

```
id(x)    = x           mod N   =     1  x + 0        mod N
stack(x) = N - x - 1   mod N   =   (-1) x + (N - 1)  mod N
cut(x)   = x + n       mod N   =     1  x + n        mod N
deal(x)  = x * inv(n)  mod N   = inv(n) x + 0        mod N
```

`inv(n)` is the modular multiplicative inverse of `n` modulo `N`. Everything will be mod `N`, so I’ll let that be implicit from here on and usually won’t specify `mod N` explicitly.

It's easy to see why `stack` and `cut` have the functions they do, but `deal` is a little less obvious. If we think of it as `i * n`, as in the original description of `deal`, we see that we can get `i` back by dividing by `n`. In modular integer arithmetic it's not always possible to divide like you can with real or rational numbers, but if the modulus `N` is prime, you can always find a number `inv(n)` such that `n * inv(n) = 1 (mod N)`. So multiplying by `inv(n)` gets us back to the index we came from before a `deal(n)` shuffle.

The important feature of expressing the shuffle as these functions is that functions can be composed. In particular, you’ll notice that all four functions are first-degree polynomials. For these, we can work out compositions analytically:

```
f(x)       = ax + b
g(x)       = cx + d
(f ∘ g)(x) = f(g(x)) = a(cx + d) + b = (ac)x + (ad + b)
(g ∘ f)(x) = g(f(x)) = c(ax + b) + d = (ca)x + (cb + d)
```

So the composition of two first-degree polynomials is always itself a first-degree polynomial. Since we can express the basic shuffle operations as first-degree polynomials, we can therefore express our whole shuffle routine as a single first-degree polynomial. This composed polynomial `f(x)` tells us, for each index `x` after the shuffle, which index `f(x)` from before the shuffle was moved to `x`. All we need to do is start with the `id()` polynomial (also known as the identity function) and successively compose the polynomials for each shuffle operation.
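
Concretely, if we store each polynomial as its coefficient pair `(a, b)`, the folding step is a few lines of Python (a sketch; the names are mine, not lifted from my solution):

```python
def compose(f, g, N):
    # (f ∘ g)(x) = f(g(x)): with f = (a, b) and g = (c, d),
    # a*(c*x + d) + b = (a*c)*x + (a*d + b), everything mod N
    a, b = f
    c, d = g
    return (a * c % N, (a * d + b) % N)

def shuffle_poly(ops, N):
    # fold every operation's polynomial into a single one; note the order:
    # earlier operations end up on the outside of the composition
    total = (1, 0)  # id(x) = 1x + 0
    for op in ops:
        total = compose(total, op, N)
    return total
```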

In order to do that, though, there are still a couple of missing pieces. First, to create the polynomial for a `deal(n)` operation we need to compute the modular multiplicative inverse `inv(n)`. Fortunately the deck size, which is our modulus `N`, is not part of the puzzle input but fixed for each part, and both sizes are prime. This means we can use Fermat's little theorem (a special case of Euler's theorem) to compute the inverse via modular exponentiation:

```
inv(n) = pow(n, N - 2)   (mod N)
```

For modular exponentiation we can also turn to Wikipedia to acquire ready-to-go pseudocode.
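
In Python, for instance, that pseudocode boils down to a short square-and-multiply loop (a sketch, using the part 1 deck size 10007 as the example modulus):

```python
def mod_pow(base, exp, mod):
    # right-to-left binary exponentiation: square the base,
    # multiply it in wherever the exponent has a 1 bit
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:
            result = result * base % mod
        base = base * base % mod
        exp >>= 1
    return result

def mod_inv(n, N):
    # valid when N is prime (Fermat's little theorem)
    return mod_pow(n, N - 2, N)

print(mod_inv(3, 10007) * 3 % 10007)  # 1
```

In Python specifically, the built-in three-argument `pow(n, N - 2, N)` does the same job with no helper code at all.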

Next, part 2 asks us to repeat our shuffle a gigantic number of times: 101,741,582,076,661. Even with all the above shortcuts, we can't just naïvely compose our polynomial with itself that many times. Instead, we can use exponentiation by squaring: because function composition is associative, repeated self-composition behaves much like exponentiation: `f ∘ (f ∘ (f ∘ f)) = ((f ∘ f) ∘ f) ∘ f = (f ∘ f) ∘ (f ∘ f)`. This means we can create "powers of two" of our function by first composing it with itself, then composing the result with itself, and so on. To handle an arbitrary repeat count, we break it into a sum of powers of two and compose the corresponding "powers of two" of our function, so we only need roughly `log2(101741582076661) ≈ 47` compositions.
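
The squaring loop itself looks exactly like modular exponentiation with composition in place of multiplication. A self-contained sketch (so the composition rule is repeated here):

```python
def compose(f, g, N):
    # (f ∘ g)(x) = f(g(x)) for coefficient pairs f = (a, b), g = (c, d)
    a, b = f
    c, d = g
    return (a * c % N, (a * d + b) % N)

def poly_power(f, k, N):
    # compose f with itself k times, using only O(log k) compositions
    result = (1, 0)  # the identity polynomial id(x) = 1x + 0
    while k > 0:
        if k & 1:
            result = compose(result, f, N)
        f = compose(f, f, N)
        k >>= 1
    return result
```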

Finally, for part 1 we need to find a position after the shuffle given a position before the shuffle - the opposite of what our polynomials compute. However, we can easily find this by inverting the polynomial:

```
f(x)      = ax + b
inv(f)(x) = cx + d

x = inv(f)(f(x)) = c(ax + b) + d = (ca)x + (cb + d)

ca = 1      =>  c = inv(a)
cb + d = 0  =>  d = -b * inv(a)

inv(f)(x) = inv(a) x + (-b * inv(a))
```

This lets us easily go the other direction and compute, for an index `x` before the shuffle, the index `inv(f)(x)` it ends up at after the shuffle.
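
In the same sketch notation, inversion is two lines once we have the modular inverse (prime modulus `N` assumed, as before):

```python
def invert_poly(f, N):
    # given f(x) = a*x + b (mod prime N), return the pair (c, d)
    # with c = inv(a) and d = -b * inv(a), so that inv(f)(f(x)) = x
    a, b = f
    inv_a = pow(a, N - 2, N)  # Fermat's little theorem, N prime
    return (inv_a, -b * inv_a % N)
```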

All together, these tricks allow us to solve both parts in 450 μs.

## Day 23: Category Six

Day 23 is again mostly about running an Intcode program in a funny configuration. My solution runs in 2.8 ms and does nothing special for performance optimization.

Update 2020-08-27: See this post for a discussion of some Intcode optimizations which were very important for this solution.