Talk #5 - Graph Theory Applications in Video Games
Description: A detailed look at Graph Theory and how it can be applied to video games.
Slogan: There's more to this than you think...
Presenters: Clara Nguyễn
Type: Class Presentation
Date Originally Presented: 2020/03/11
Downloads: [ Slides (Formal) ] [ Slides (Original) ]
Talk Summary & Table of Contents
Graph Theory isn't restricted to being just "theory". There are several real-life applications of it. One of those just happens to be video games. In this talk, I give a brief overview of several concepts where Graph Theory is applied. The topics covered vary a bit:
  • The "rules" of graphs in graphics - Vertices in 3D space, how they are represented, and the "rules" of dealing with vertices and edges.

  • Race Games & Lap Counting - Racing games where circuits are used to count laps, as well as a mention of Mario Kart Wii and its Ultra-Shortcuts, to motivate the simplicity of Hamiltonian Circuit checking for lap counting.

  • Maze Generation w/ Disjoint-Sets & Union-Find - I talk about how to generate mazes efficiently, as well as taking it a step up. Generation of multiple connected mazes along with multi-floor/dimensional mazes are described. The true power of the Disjoint-Set data structure with optimisations is also discussed in depth.

  • bitDP, a theoretical approach - As a bonus, a theoretical "Part 2" to my bitDP talk is included too. I talk about the theory behind it and how it can be applied to a game to optimally guarantee that a Hamiltonian Path exists in a generated graph.

  • Entombed (1982 Atari 2600 game) - For an added bonus, I also discuss an unsolved problem involving an Atari 2600 game, Entombed.

Just a heads up, this talk is 60-75 minutes long. Concepts will be discussed in great depth. However, the talk was intentionally constructed for me to be over-prepared, with topics that can be skipped if I do not have time to cover everything. As such, I link to this page in the slides if anyone is interested in the full talk. Audience interaction is welcome mid-talk. In the event that I skip topics, the questions below are guaranteed to be covered while I'm talking.
As a requirement, we had to put questions in for the audience to answer on paper mid-talk. The answers were hidden in the slides to see if students were paying attention. Here are mine, with answers at the end of the page:
  1. Given a 3D model M of n vertices, how many triangles are drawn if done via Triangle List?
  2. What is lg*(2^(2^65536))? Alternatively, what is lg*(⁶2)?
  3. What does bitDP stand for?
The "rules" of graphs in graphics
Most graphs discussed in Graph Theory are abstract and may be visualised in any way, as long as they follow a set of rules. Likewise, in computer graphics, there are "rules" too. However, they are quite different from the theoretical graph theory we are used to discussing. How a vertex and an edge are perceived is different and absolute.

Here's a few rules:
  • Vertices have positional coordinates (x, y, z) to define position in space. Consequently, this means that there is only one way to represent them in space.

  • Edges (connections between vertices) are implied, based on a drawing mode of the programmer's choice. They all have their own advantages and disadvantages. I'll go over them below.

  • Everything is oriented around triangles. Period. If you wanted to make a square, you do it with two triangles. If you wanted to make a curved surface, you have to estimate it with triangles.

Coordinate Systems

We have three dimensions to work with here. The coordinates are Cartesian, and denoted (x, y, z). Depending on implementation, you may find the "y" axis being the "height" axis, or the "z" axis being the "height" axis. For simplicity in this talk, any potential discussion of 3D coordinates will assume "z" as the "height" axis. This is shown as follows:

Z-axis being shown as the "height" axis.

Back-Face Culling

When we draw a triangle, it's filled, obviously. However, only one side of the triangle is drawn. The other side is invisible. This is due to Back-Face Culling. The side that gets drawn depends on the winding order of the vertices when you draw to the screen. If they wind clockwise, the triangle faces the camera, and is thus drawn to the screen. If they wind counter-clockwise, the triangle faces away from the camera, making it invisible. (Which winding counts as front-facing is a configurable convention in most APIs; clockwise-front is assumed throughout this talk.)

The two figures below show this quite elegantly. Pay attention to the arrow that is circling around, as well as how it relates to V(M) = { v1, v2, v3 }.

Clockwise drawing of V(M) = { v1, v2, v3 }.

Counter-clockwise drawing of V(M) = { v1, v2, v3 }.
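Numerically, the winding test boils down to a cross product. The sketch below is generic (not tied to any particular graphics API): in a y-up 2D screen space, the sign of the z-component of (v2 − v1) × (v3 − v1) tells which way v1 → v2 → v3 winds.

```python
def winding(v1, v2, v3):
    """Classify a screen-space triangle's winding order.

    Returns "counter-clockwise" or "clockwise" from the sign of the
    z-component of the cross product (v2 - v1) x (v3 - v1), assuming
    a y-up 2D coordinate system.
    """
    (x1, y1), (x2, y2), (x3, y3) = v1, v2, v3
    cross_z = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    return "counter-clockwise" if cross_z > 0 else "clockwise"

# Swapping two vertices flips the winding, and therefore which side is drawn:
# winding((0, 0), (1, 0), (0, 1)) -> "counter-clockwise"
# winding((0, 0), (0, 1), (1, 0)) -> "clockwise"
```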

Edge Implying: Drawing Modes

In 3D drawing libraries (e.g. OpenGL), the programmer is given many ways to transform a list of vertices into triangles. All of these ways are for implying where edges will exist while rendering the scene. I won't go over all of the modes, only the ones that I find are the most intuitive to learn quickly.

For all of these, once again, assume that the list of vertices is represented as V(M), where n is the size of the list.
  • Triangle List: Every 3 vertices in the list form a triangle. Consequently, this means that n must be a multiple of 3. n / 3 triangles will be drawn.

    V(m) = { v1, v2, v3, v4, v5, v6 } drawn via Triangle List

  • Triangle Strip: The first 3 vertices will form a triangle. After that, any additional vertex will form a triangle with the previous 2 vertices that come in the list. This means n ≥ 3, and n - 2 triangles will be drawn.

    First 3 vertices drawn via Triangle Strip.
    V(M) = { v1, v2, v3 }.

    Additional vertex added and drawn via Triangle Strip.
    V(M) = { v1, v2, v3, v4 }.

  • Triangle Fan: Everything pivots off of the first vertex in a "fan" shape. The first 3 vertices will form a triangle. Then every additional vertex will create a triangle based on the previous vertex and the first one. Like Triangle Strip, this means n ≥ 3, and n - 2 triangles will be drawn.

    First 3 vertices drawn via Triangle Fan.
    V(M) = { v1, v2, v3 }.

    Step 2. Additional vertex.
    V(M) = { v1, v2, v3, v4 }.

    Step 3. Additional vertex.
    V(M) = { v1, v2, v3, v4, v5 }.
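The three modes can be sketched as list transformations (the helper names below are my own, not a real graphics API):

```python
def triangle_list(verts):
    """Every 3 consecutive vertices form one triangle; n must be a multiple of 3."""
    assert len(verts) % 3 == 0
    return [(verts[i], verts[i + 1], verts[i + 2]) for i in range(0, len(verts), 3)]

def triangle_strip(verts):
    """After the first triangle, each new vertex forms a triangle with the previous two."""
    return [(verts[i], verts[i + 1], verts[i + 2]) for i in range(len(verts) - 2)]

def triangle_fan(verts):
    """Everything pivots off the first vertex: each new vertex forms a
    triangle with the previous vertex and vertex 0."""
    return [(verts[0], verts[i], verts[i + 1]) for i in range(1, len(verts) - 1)]

V = ["v1", "v2", "v3", "v4", "v5", "v6"]   # n = 6
# Triangle List draws n / 3 = 2 triangles; Strip and Fan each draw n - 2 = 4.
```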

Now that we are clear on the rules of edge implying, we can get to other things...
Race Games & Lap Counting
Lap counting is a central element in racing games. There are actually a few things in a racing game that we need to keep track of...
  • Player Lap Count
  • Player Position
  • Distance between players
How about a solution that revolves around all three of those? Sure, let's give it a shot. For starters, for the entirety of this section, let's assume the following (extremely simple) racetrack:

An extremely simple racetrack...

The Basics

The easiest solution to making tracks work with lap detection is to use a "checkpoint" system. As such, let's divide the track up into several checkpoints. Thus, the player must hit all checkpoints, as well as the finish line for a lap to count. Let's take a look at how the checkpoints may be laid out:

Checkpoint generation for the racetrack.

Okay, well that's all set. But what have we really created here? In Graph Theory, we call this a Hamiltonian Circuit. Treat every checkpoint as a vertex in a graph. Thus, we have to go through every vertex and go back to the very beginning for a lap to count. Neat huh? Here's how the track may be represented as a graph:
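The checkpoint-circuit idea above can be sketched in a few lines (the class and its names are my own illustration, not any real engine's code): a lap counts only when every checkpoint is hit in circuit order, ending back at the finish line (checkpoint 0).

```python
class LapCounter:
    """Lap counting over a checkpoint circuit (a Hamiltonian Circuit).

    A lap counts only if every checkpoint is hit in circuit order,
    finishing back at checkpoint 0 (the finish line).
    """
    def __init__(self, num_checkpoints):
        self.n = num_checkpoints
        self.next_expected = 1   # after the start, checkpoint 1 comes first
        self.laps = 0

    def hit(self, checkpoint):
        if checkpoint != self.next_expected % self.n:
            return               # out-of-order hits are ignored
        if checkpoint == 0:      # full circuit completed
            self.laps += 1
            self.next_expected = 1
        else:
            self.next_expected += 1
```

Driving backwards over the finish line, or skipping ahead to a later checkpoint, simply does nothing: only the full circuit advances the counter.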



Distance between players

Alright, we now have a simple mechanism for counting laps. But what about if you wanted to keep live statistics on how far away you are from your opponent racers? Well, that can be arranged quite easily with this setup as well. Brace yourself, it's about to get colourful.

Rather than doing complex math to compute distance across those turns, we can skip that step entirely by reinterpreting the graph in a different way. We can take the graph that we created up above and straighten it out. The only exception, of course, is the last vertex's edge connecting to the finish line. That'll loop back. Let's take a look at this...

Highlighting of "zones" by checkpoint.

Pay attention to the colours there. We're going to straighten this out. Behold:

Highlighted zones straightened out.

Now then, computing the distance between players is no longer a matter of a 2D computation, but a 1D one. The problem was reduced down to a simple subtraction between two players' positions. Looking at Slide 27, this computation should be obvious to perform:

Slide 27: PL vs. PN. The ultimate race.
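The straightened-out subtraction looks like this in code (zone count, zone length, and the helper names are made up for illustration):

```python
ZONE_COUNT = 4        # zones on the simple track above (one per checkpoint)
ZONE_LENGTH = 100.0   # length of one zone; an arbitrary unit for illustration

def progress(laps, zone, frac):
    """Scalar position along the straightened-out track.

    laps - completed laps
    zone - index of the zone the player is currently in
    frac - fraction of the way through that zone, in [0, 1)
    """
    return (laps * ZONE_COUNT + zone + frac) * ZONE_LENGTH

def gap(player_a, player_b):
    """Distance between two players: just a 1D subtraction."""
    return abs(progress(*player_a) - progress(*player_b))

# gap((1, 2, 0.5), (1, 1, 0.5)) -> 100.0 (exactly one zone apart)
```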

Mario Kart Wii and the Ultra-Shortcut
In the previous section, I talked about how race progression can be tracked efficiently. I also showed how turning a racetrack into a graph makes other problems trivial to compute. But what happens whenever you do not follow it? Surely there are other solutions out there, right? Let's take a look at Mario Kart Wii, a racing game for the Nintendo Wii that does things just a tad differently.

Let's assume the same extremely simple track as before. This game has different types of checkpoints:
  • Spawn Checkpoints: These are where you spawn back up if you fall out of bounds (e.g. out of the level).

  • Key Checkpoints: Passing these updates your "counter", which the game uses to tell where you were in the track. When passing the finish line, you must have last been in contact with the very last key checkpoint.

  • Finish Line: I think this is self-explanatory...

Let's look at the track with these added in:

Simple track with Spawn and Key checkpoints added in.

Looks a little different, huh? For the game to register that you've successfully hit a key checkpoint, you have to be between it and the next checkpoint (whether that's another key checkpoint, a spawn checkpoint, or a finish line). This will update an in-game counter to the index of the key checkpoint.

Confused? Here's what I mean. If you want the game to register that you've successfully hit checkpoint 1, you need to be within the highlighted region below:

Range for checking if you passed 1.

Alright... let's break this method apart.


There exists a critical flaw in the key checkpoint system this game uses that allows players to go through laps at a much faster pace than usual. Recall what I said earlier... "When passing the finish line, you must have last been in contact with the very last key checkpoint."

When the race starts, we are at Checkpoint 0, as we are at the finish line. If we go in reverse, we will hit the previous spawn checkpoint and the game will deduct a lap from the counter. However, if we can somehow jump behind that check and go to key checkpoint 3, hit it, then drive to the finish line, the game will actually count it as a lap, letting us skip 75-87.5% of the course.

For the record, I'm talking about the highlighted region here:

Range for checking if you passed 3.

The method I have described is known as an Ultra-Shortcut, and has actually been used in Mario Kart Wii to obliterate some world records. If you're curious by how much, check out the world records for Grumble Volcano. Pay specific attention to June 1st, 2008, as that's the day the Ultra-Shortcut method was discovered. The record went from 1'46"052 to 0'23"669 in a single day.
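The flaw is easy to demonstrate with a deliberately simplified model of the key-checkpoint scheme described above (the real game tracks more state than this):

```python
LAST_KEY = 3   # index of the final key checkpoint before the finish line

def count_laps(events):
    """Replay a sequence of events: an int = key checkpoint touched,
    'F' = finish line crossed.

    A lap counts if the last key checkpoint touched before crossing the
    finish line was LAST_KEY -- the game never checks the ones in between.
    """
    laps, last_key = 0, 0
    for e in events:
        if e == 'F':
            if last_key == LAST_KEY:
                laps += 1
            last_key = 0
        else:
            last_key = e
    return laps

# A legitimate lap touches every key checkpoint in order:
legit = count_laps([1, 2, 3, 'F'])
# The Ultra-Shortcut jumps straight to key checkpoint 3:
shortcut = count_laps([3, 'F'])
# Both count as a lap, even though the shortcut skipped checkpoints 1 and 2.
```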

Moral of the Story: Use Graph Theory to guarantee proper lap completion. Trying to be "clever" like this is subject to flaws, and it was obviously abused after it was discovered.
Maze Generation w/ Disjoint-Sets & Union-Find
Maze Generation is an element in games that can give players a unique experience every time they play. Doing them efficiently can be accomplished via Graph Theory. If you attended my previous talk, you will know how powerful the Disjoint-Set data structure is for object detection. In this talk, however, we're going to use it to generate mazes... the right way.

Observing Properties of Mazes

Let's see what we got here... There are a few things about mazes that we should pay attention to prior to making one ourselves:
  • Cells are "matched" with a select few adjacent ones. Cells that have been matched do not have a wall between them.

  • All cell pairs that are not "matched" have a wall separating them.

  • Mazes can be represented as graphs. Depending on the properties of the maze, it can be a minimum spanning tree.

  • We can use typical graph algorithms to find solutions to mazes. Popular choices include DFS (Depth-First Search) and BFS (Breadth-First Search). We can use them to find a solution from any S to any T, easily.

Now then, let's talk about Disjoint-Sets...

The Disjoint-Set Data Structure

Initially, treat this data structure as if we have n disjoint sets (hence the name), not connected to each other at all. When thinking about a maze, treat this as a maze where all walls are up, and you can't go from any cell to any other cell. Then, we can use operations to manipulate this data structure and connect cells together.

We have two operations we can use: union and find
  • Union: Join two disjoint sets together.

  • Find: Get the vertex that is at the "root" of a disjoint set. This is the "ID" of that set.

Let's discuss them... For now, let's only talk about Union. Find has a neat optimisation that'll come in handy later.

Union Operation

This one is trivial. Join two disjoint sets together. For this part, I'm going to notate it as union(i, j) where Si = Si ⋃ Sj. In plain English: we merge the two sets Si and Sj into Si. Then, Sj is obliterated.

Let's show a visual example of this... It might make some things click more easily.

Example: Assume a graph M where n = 16 (there are 16 vertices) and m = 0 (there are 0 edges). Each separate vertex is part of its own set Si (v0 ∈ S0, v1 ∈ S1, ..., vn-1 ∈ Sn-1). Shown as a 4 × 4 grid, we have the following:

Maze with all walls up.

Now let's say we performed union(1, 2). The figures below show the selection of 1 and 2 (left) as well as the result of the union operation (right), visually:

Highlighting of 1 and 2 for union()

Post-union(). Wall between 1 and 2 is broken.

Post-union(), S1 = { 1, 2 }. S2 is empty and obliterated. How about we try a union(2, 6) now?

Highlighting of 2 and 6 for union()

Post-union(). Wall between 2 and 6 is broken.

To properly generate a maze, we can just keep repeating this procedure until there is only one set left.

Maze with no disjoint sets left.

At this point, the only remaining set is S0 = { v0, v1, ..., vn-1 }. We are unable to merge any more sets (and break any more walls) because they all belong to the same set already. Any other walls being broken down will force a cycle to appear in the graph. Let's break down the kind of graph we have just created:
  • The maze generation algorithm we just used is known as Randomised Kruskal's Algorithm.

  • There are no cycles in this graph.

  • There is exactly one path from every S to every T.

  • If drawn as a graph, it is a minimal spanning tree.

  • It tends to generate mazes with patterns that are easy to solve.
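The procedure above (Randomised Kruskal's) can be sketched directly: shuffle all candidate walls, and break a wall only when its two cells are in different sets. The function below is my own minimal illustration, using a plain union-find with no optimisations yet.

```python
import random

def generate_maze(width, height, seed=None):
    """Randomised Kruskal's: break random walls between cells in
    different sets until every cell is in one set.

    Returns the list of broken walls, i.e. the maze's passages (edges).
    """
    rng = random.Random(seed)
    n = width * height
    parent = list(range(n))        # plain union-find, no optimisations yet

    def find(i):                   # walk up to the root of i's set
        while parent[i] != i:
            i = parent[i]
        return i

    # every wall between two orthogonally adjacent cells
    walls = []
    for y in range(height):
        for x in range(width):
            c = y * width + x
            if x + 1 < width:
                walls.append((c, c + 1))        # wall to the right
            if y + 1 < height:
                walls.append((c, c + width))    # wall below
    rng.shuffle(walls)

    passages = []
    for a, b in walls:
        ra, rb = find(a), find(b)
        if ra != rb:               # different sets: break the wall
            parent[rb] = ra        # union
            passages.append((a, b))
    return passages
```

Since every broken wall merges two sets and the loop stops being able to merge once one set remains, the result always has exactly n − 1 passages: a spanning tree with no cycles.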

This wouldn't be a talk by me, though, if I didn't say we can do better, now would it? Let's expand on this concept:

A simple square maze is boring. We can do better.

We can connect 2 mazes together by breaking down a wall between them. We can even add a "hallway" between them if we wanted. This is only possible because there exists a path from every S to every T.

Thus, if we broke down a wall on the outer end of the maze and merged two together, there will always exist a path from one maze to the other, as there will always be a path to the cell with the outer wall we broke. Here's what I mean:

Two copies of M, with a hallway connecting v7 and v20 together.

This kind of "horizontal expansion" is possible with mazes. We can also do this vertically. Notice, though, that there is a valid path from anywhere in the left maze to anywhere in the right maze. To take this to the extreme, we can still do better, and expand the maze by an entire dimension (or a few, if we really wanted to). Let's give it a second floor...

Two copies of M, with an "elevator" connecting v6 and v22 together.

We can go on, but I think this gets the point across. We can also combine the horizontal/vertical expansion with this "elevator" to make some pretty unique (but also tedious) puzzles!

Find Operation

The "find" operation is used to find the ID of the set a vertex belongs in. I'll denote it as find(i). If S0 = { v0, v1 } and I call find(1), it'll return 0, because v1 ∈ S0. By the time the maze generation algorithm above is done, calling find() on any vertex will return 0, as they are all in S0.

Since the union(i, j) of two sets, in a graph theory sense, is simply connecting an edge between two points, the find(i) operation is simply going up to the root of the tree and returning that value. Let's go through the previous maze generation example once more. This time, let's see how a graph is built from all of this.

Maze with all walls up.

Graphical representation.

Maze post-union(1, 2).

Graphical representation.

Maze post-union(2, 6).

Graphical representation.

Maze fully processed.

Graphical representation.

Now that we've constructed the graph, let's order it to where the coloured node (the root) is at the top. It'll look like this:

Maze drawn as a "tree"-like graph.

There's something bad about this... Take a look at the deepest node in that tree. Since find just goes up to the top and returns the root node, it has to go through 6 vertices before it returns. That's an O(n) operation right there.

Now, I'm not going to make a huge deal out of a linear-time lookup on a tiny example. But a maze of size 2048 × 2048 would speak for itself. And, like I said, we can do better... much better.

Improving "find(i)"

There are two techniques we can apply to our operations to make find() perform much better: Union by rank and Path compression...
  • Union by rank - When merging two sets, attach the shorter tree to the taller one. This forces minimal (or no) growth of the tree. In fact, at most, the tree can only grow in height by 1 from this.

  • Path compression - Make every node point straight to the root node.

The visuals of these two would get messy quite quickly, so I decided to not draw them out. But I think those explanations make it obvious how these improve on what we had before.
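Both techniques amount to a few extra lines in the data structure. This is a standard implementation sketch (names are mine), not code from the slides:

```python
class DisjointSet:
    """Union-Find with union by rank and path compression."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, i):
        if self.parent[i] != i:
            # path compression: point i straight at the root
            self.parent[i] = self.find(self.parent[i])
        return self.parent[i]

    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri == rj:
            return False           # already in the same set
        if self.rank[ri] < self.rank[rj]:
            ri, rj = rj, ri        # union by rank: attach shorter under taller
        self.parent[rj] = ri
        if self.rank[ri] == self.rank[rj]:
            self.rank[ri] += 1     # equal heights: the tree grows by 1
        return True
```

Dropping this in as the union/find of the maze generator above changes nothing about the mazes produced, only how fast find() answers.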

Now then... with these optimisations in place, our O(n) lookup time suddenly becomes lg* n (the iterated logarithm, base 2). In the world of Computer Science, this is essentially constant time. For the record, if x = 2^65536, then lg*(x) = 5. Here's a table of values just to show how slowly the function grows...

Values of lg* x.

For the record, ⁿa is not a typo. It's known as tetration, and is a step up from exponents. If I said ³2, that's the same as 2^(2^2). ⁴2 = 2^(2^(2^2)). You get the idea.
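The iterated logarithm is tiny to compute. Below is a floor-based sketch (it uses floor(log₂) at each step, which agrees with the usual real-valued definition on the power-of-two towers used here, though not for every integer):

```python
def lg_star(x):
    """Iterated base-2 logarithm: how many times floor(log2) must be
    applied before the value drops to 1 or below."""
    count = 0
    while x > 1:
        x = x.bit_length() - 1   # floor(log2(x)) for integers x >= 1
        count += 1
    return count

# The chain for 2^65536: 2^65536 -> 65536 -> 16 -> 4 -> 2 -> 1, so lg* = 5.
```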
bitDP, a theoretical approach
Back to Hamiltonian Paths. I've discussed bitDP from an algorithmic perspective before, but I haven't really gone over why it works. That's where this talk comes in.

What is a Hamiltonian Path?

Hamiltonian Paths are paths that traverse through every vertex in a graph exactly once. This is a more relaxed variant of Hamiltonian Circuits, which carry the additional requirement of needing the last vertex to have an edge that goes to the first. Regardless, the problem of finding these is NP-Complete.

"Why should I care? How can it be used in games?"

One pretty good situation I can immediately see is a puzzle game with unique and randomly generated graphs requiring you to find a path that visits every vertex (The game will have to check if a valid solution exists before it issues the graph to you). It's no fun if you have a puzzle that's literally impossible to solve, right?

Additionally, bitDP will generate a DP table, allowing you to efficiently check whether a player's path of choice is also a Hamiltonian Path. More on that in a bit.

Traditional Hamiltonian Path Detection: DFS

The naïve approach is the pure brute-force approach. Performing a DFS on the graph from any vertex will eventually find a valid Hamiltonian Path if one exists in the graph. The procedure works as follows:

  1. Every vertex starts as unvisited.

  2. Keep a list of vertices visited (in the order we visited them). We'll call this V'(G) = {}.

  3. Choose a starting vertex S. This will be the current vertex v for the following steps as well.

  4. Mark v as visited and add it to the end of V'(G).

  5. Go through every vertex that v is connected to and try to repeat the procedure, recursively. This is done "for each" vertex, one at a time.

  6. If the size of V'(G) is equal to the number of vertices in G, a Hamiltonian Path was found!

  7. After exhausting all vertices connected to v, remove it from V'(G) and mark it as unvisited so another potential path can include it in its traversal.
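The steps above translate almost line-for-line into a recursive function (a minimal sketch, assuming an adjacency-list graph):

```python
def hamiltonian_path_dfs(adj):
    """Brute-force DFS for a Hamiltonian Path.

    adj: adjacency list, adj[v] = list of neighbours of v.
    Returns the path as a list of vertices, or None if none exists.
    """
    n = len(adj)
    visited = [False] * n
    order = []                      # V'(G): vertices in visit order

    def dfs(v):
        visited[v] = True
        order.append(v)
        if len(order) == n:         # every vertex visited: path found!
            return True
        for u in adj[v]:            # try each neighbour, one at a time
            if not visited[u] and dfs(u):
                return True
        visited[v] = False          # backtrack so other paths can use v
        order.pop()
        return False

    for s in range(n):              # try every starting vertex S
        if dfs(s):
            return order
    return None
```

The backtracking step (un-marking v) is what makes the running time blow up factorially: the search may revisit the same partial orderings over and over.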

On slide 60, this is given a mathematical definition as follows:

Slide 60: DFS Mathematical Breakdown

This "algorithm" is what we do when we are actually looking for a Hamiltonian Path in-person, outside of computation. It's effective, but not efficient, sporting a factorial time complexity. This is shown on slide 66:

Slide 66: DFS performance with n vertices in G measured in seconds.

For a computer, and especially a game, we need a more optimal approach if we want the procedure to be transparent to the end user, especially in situations of larger graphs. Let's talk.

Dynamic Programming

This time, I am not going to assume that the audience has a background in Computer Science. To discuss a more efficient means of finding a Hamiltonian Path, let's define dynamic programming.

So what is it? It's a mathematical optimisation method developed by Richard Bellman to simplify solving a complex problem by breaking it down into "sub-problems" that are easier to solve. These sub-problems are then broken into even smaller sub-problems, if possible. This occurs recursively. The smaller problems are then solved, and their solutions are used to efficiently compute the result of the more difficult problem. Eventually, you'll get back to the original problem and be able to solve it more efficiently with the results from its sub-problems.

Quite wordy, don't you think? Let's look at a Dynamic Programming Algorithm to solve the Hamiltonian Path problem. Maybe it'll click then if it didn't now.

The Held-Karp Algorithm

This algorithm was developed by Michael Held and Richard Karp, and independently by Richard Bellman, in 1962 as a way to solve TSP (the Travelling Salesman Problem). While we won't talk about TSP, we can use this algorithm to solve Hamiltonian Path detection more efficiently. In fact, it does it in 2^N × N^2 computations. That's much better than N! computations.

So how does it work? Let's make an observation about graphs, as well as their subgraphs. Behold slide 70, which introduces a graph G where V(G) = { v0, v1, v2, v3 }, and two subgraphs, G' and H:

Slide 70: An observation of G and how subgraphs may be used to determine hamiltonian paths.

It's trivial to tell that G' has a Hamiltonian Path, simply being v0 -> v1. Compare each end of the Hamiltonian Path with any vertex in H. If they are adjacent in G (e.g. v0 -> v2), then there's a Hamiltonian Path in that particular subgraph as well. We didn't have to check every vertex for a valid path. We only had to check one of the ends and see if they connected. Slide 72 visually shows this with the same graph, but also I, where V(I) = { v0, v1, v2 }:

Slide 72: Notice how we can reach v2 from v0 (there are a few other examples).

An easier way to look at it (from Talk 3): if we can go A -> B and B -> C, then there exists a path A -> B -> C. Now let's see how to computationally store this. Enter bitDP.

Representing subgraphs via bitDP

Storing the results on a computer is as simple as using a table. The table is sized N × 2^N, since there are 2^N possible sub-problems in a graph of N vertices. In the graph G above, there are 4 vertices. These can be represented as bits, where 0 means the vertex isn't in the subgraph and 1 means it is in the subgraph. So if I had 1111, then the subgraph has all 4 vertices. If I had 1101, then the subgraph has v0, v2, and v3 (it's read from right-to-left), and so on.
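Decoding a mask into its vertex set is one line of bit-testing (a small illustrative sketch):

```python
def subgraph(mask, n=4):
    """Decode a bitmask into the set of vertices it selects (bit i = vertex i,
    read right-to-left)."""
    return {i for i in range(n) if mask & (1 << i)}

# subgraph(0b1111) -> {0, 1, 2, 3}  (all 4 vertices)
# subgraph(0b1101) -> {0, 2, 3}     (v1 is missing)
```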

This table method, known as bitDP (ビット動的計画法 bitto dōteki keikakuhō, lit. "Bit Dynamic Programming"), is how we will store the results of subgraph Hamiltonian Paths, so we don't have to keep repeating traversals we already know the answer to. Behold the following blank bitDP table:

Vertex/Mask 0 1 2 3 4 5 6 7 8 9 A B C D E F
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

The rows are the vertices and the columns are the subgraphs expressed as bitmasks (the 0-1 format discussed earlier).

Reading a bitDP table

Unlike Talk 3, I only discuss how to read a bitDP table in this talk. Behold the filled out bitDP table for graph G above:

Vertex/Mask 0 1 2 3 4 5 6 7 8 9 A B C D E F
0 0 1 0 1 0 1 0 1 0 0 0 1 0 1 0 1
1 0 0 1 1 0 0 1 1 0 0 1 0 0 0 1 1
2 0 0 0 0 1 1 1 1 0 0 0 0 1 0 1 1
3 0 0 0 0 0 0 0 0 1 0 1 1 1 1 1 1

This looks a little intimidating, but slide 76 covers how to read it:

Slide 76: Example of how to read column 0xB (1011) in the bitDP table.

Each column is its own subgraph. Values are set to 1 if there is a Hamiltonian Path in that subgraph that ends at that row's vertex. For instance, take column 0xB, row 0: if we look at that subgraph, there does exist a Hamiltonian Path that ends at v0. The same can be said about v3, hence why row 3 is also 1. Overall:
  • There exists a path that ends at Node 0 (3 -> 1 -> 0).
  • There is not a path that ends at Node 1.
  • There exists a path that ends at Node 3 (0 -> 1 -> 3).
In this talk, we do not discuss algorithms in depth, so I do not go over the creation of a bitDP table. However, I do go over it in Talk 3.

Solving subgraphs this way brings a very obvious speed gain. Behold:

Slide 77: DFS vs Held-Karp (via bitDP) performance.
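For completeness, here is a sketch of how a table like the one above can be built (the construction itself is Talk 3 material, not these slides, and the edge set of G below is inferred from the filled table rather than stated in the text):

```python
def bitdp_table(n, edges):
    """dp[v][mask] = 1 if the subgraph selected by `mask` has a
    Hamiltonian Path ending at vertex v."""
    adj = [[False] * n for _ in range(n)]
    for a, b in edges:
        adj[a][b] = adj[b][a] = True

    dp = [[0] * (1 << n) for _ in range(n)]
    for v in range(n):
        dp[v][1 << v] = 1                  # a lone vertex is a trivial path
    for mask in range(1 << n):             # smaller subgraphs first
        for v in range(n):
            if not dp[v][mask]:
                continue
            for u in range(n):             # extend the path ending at v by one edge
                if adj[v][u] and not (mask & (1 << u)):
                    dp[u][mask | (1 << u)] = 1
    return dp

# Edges of the example graph G, inferred from the filled table above.
EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
dp = bitdp_table(4, EDGES)
# Column 0xB = {v0, v1, v3}: paths end at v0 and v3, but not at v1.
```

Note how each column only ever reads from strictly smaller masks: the sub-problems are solved first, exactly as the dynamic programming definition promised.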

Honourable Mention: Entombed (1982 Atari 2600 game)
Yes, there's a "Part II" to maze generation in this talk! There is one game out there whose maze generation is an unsolved problem in computer science. The game in question is Entombed, released in 1982 for the Atari 2600.

The game's structure is fairly simple. You move a player down a maze, avoid enemies, and don't get stuck in a dead end. The screen scrolls upward and you must make your way down to the end of the maze. Contact with an enemy or being trapped in a dead end results in a game over.

So what makes this so interesting? The maze generation "algorithm" itself! Storing all of these mazes is impossible on an Atari 2600 cartridge, so the maze was generated "on-the-fly" with a lookup table. They didn't use Disjoint-Sets like we discussed earlier in this talk. So, what is this magical table? Slide 86 shows it:

Slide 86: Lookup table for maze generation in the Entombed ROM. "?" represents a random value.

The value of "x" is determined by "a", "b", "c", "d", and "e", which are nearby cells highlighted in Slide 86 above.

So... why does this work? Well, no one knows. The developer handed the maze generation task off to another programmer, who got drunk when developing it. In an interview, the programmer said "He told me it came upon him when he was drunk and whacked out of his brain".

How does it relate to Graph Theory? Well, it's an unsolved problem, and the other maze generation algorithms rely on Graph Theory to work. So, how do we know Graph Theory can't prove why this works? But I digress; apparently you have to be drunk to make cool stuff...
These were my sources for this talk. Yes, I cite myself once. Go there if you want external sources for bitDP.
Question Answer Key
These questions were shown at the beginning and end of the presentation. Their answers were present in the slides and I went over them as well.

  1. "Given a 3D model M of n vertices, how many triangles are drawn if done via Triangle List?"

    n / 3

  2. "What is lg*(2^(2^65536))? Alternatively, what is lg*(⁶2)?"

    6. The alternative was given to make the problem potentially easier to do. ⁶2 = 2^(2^65536). Lastly, lg*(ⁿ2) = n.

  3. "What does bitDP stand for?"

    Bit Dynamic Programming. "ビット動的計画法" is also acceptable, but realistically I doubt someone would write it down. (Post-talk edit: Yup, no one wrote that down.)