AI Pioneer Debut: Maze SLAM #
1. Simulation Driver #
The simulation loop is implemented inside `_tick()` and scheduled with tkinter's `after(...)`:
```python
def _tick(self) -> None:
    self.frame_count += 1
    if not self.flood.done:
        self.flood.step()
    if self.astar.status == "searching":
        for _ in range(self.astar_expansions):
            self.astar.step()
            if self.astar.status != "searching":
                break
    elif self.astar.status == "found" and self.astar_anim_index < len(self.astar.path) - 1:
        self.astar_anim_index += 1
        self.astar_runner_cell = self.astar.path[self.astar_anim_index]
    self._update_animation_state()
    if self.frame_count % self.render_every == 0:
        self._draw_visualizations()
        self._update_info()
    self.root.after(self.tick_ms, self._tick)
```
What this does, precisely #
- Floodfill advances by one exploration step per frame.
- A* expands multiple nodes per frame (`--astar-expansions`).
- The mouse markers are interpolated via `_update_animation_state()` for smoother motion.
- Rendering is throttled by `render_every`, then the next frame is queued with `after(...)`.
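The interpolation inside `_update_animation_state()` is not shown in the excerpt; a minimal sketch of how a marker could be blended between its previous and current cell each frame (the function name and linear easing here are assumptions, not taken from the repo):

```python
def lerp_cell(prev, cur, t):
    """Linearly interpolate between two (row, col) cells; t in [0, 1]."""
    return (prev[0] + (cur[0] - prev[0]) * t,
            prev[1] + (cur[1] - prev[1]) * t)
```

Advancing `t` a little each tick, then snapping `prev = cur` once `t` reaches 1.0, gives the marker a smooth glide instead of a cell-to-cell jump.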
2. Maze Representation and Generation #
2D (class `Maze`) #
- `walls[(r, c)]` stores `N`/`E`/`S`/`W` booleans.
- `neighbor(...)`, `has_wall(...)`, and `can_move(...)` are the movement primitives.
- `goal_cells()` returns the center cell(s): 1 center for odd grids, 4 centers for even grids.
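As a concrete sketch of that representation (the direction constants, delta table, and helper layout are assumptions rather than the repo's exact code):

```python
N, E, S, W = "N", "E", "S", "W"
DELTA = {N: (-1, 0), E: (0, 1), S: (1, 0), W: (0, -1)}

size = 4
# Every cell starts fully walled in; carving later opens passages.
walls = {(r, c): {N: True, E: True, S: True, W: True}
         for r in range(size) for c in range(size)}

def can_move(cell, d):
    """True only if the wall is open and the neighbor is in bounds."""
    if walls[cell][d]:
        return False
    r, c = cell[0] + DELTA[d][0], cell[1] + DELTA[d][1]
    return 0 <= r < size and 0 <= c < size
```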
Generation is DFS maze carving + random extra openings:
```python
start = (0, 0)
stack = [start]
visited = {start}
while stack:
    current = stack[-1]
    candidates = [(nxt, d) for nxt, d in self.neighbors(current) if nxt not in visited]
    if not candidates:
        stack.pop()
        continue
    nxt, d = self.rng.choice(candidates)
    self._remove_wall(current, nxt, d)
    visited.add(nxt)
    stack.append(nxt)
```
Then it injects extra openings:
```python
extra_openings = max(1, (self.size * self.size) // 12)
```
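A hedged sketch of how such a pass could work; the `has_interior_wall` and `remove_wall` callables stand in for `Maze` internals and are assumptions:

```python
import random

def extra_opening_count(size: int) -> int:
    # One extra opening per ~12 cells, never fewer than one.
    return max(1, (size * size) // 12)

def inject_openings(size, has_interior_wall, remove_wall, rng=None):
    """Remove random interior walls until the budget is spent,
    turning the perfect (loop-free) maze into one with loops."""
    rng = rng or random.Random(0)
    removed = 0
    while removed < extra_opening_count(size):
        cell = (rng.randrange(size), rng.randrange(size))
        d = rng.choice("NESW")
        if has_interior_wall(cell, d):
            remove_wall(cell, d)
            removed += 1
```

Extra openings matter for the comparison later: with loops, the optimal A* path can differ from the first route floodfill discovers.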
3. Floodfill Explorer Internals #
`FloodFillExplorer` (2D) #
State model:
- `known[cell][dir]` is in `{UNKNOWN, OPEN, WALL}`.
- Boundary walls are initialized as known walls.
- `distances` is recomputed by reverse BFS from the goal set.
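The reverse BFS itself is not shown in the excerpt; a minimal sketch, with `open_neighbors(cell)` standing in for the explorer's known-map lookup:

```python
from collections import deque

def compute_distances(goals, open_neighbors):
    """Reverse BFS: every goal is distance 0, and each known-OPEN
    move outward adds 1, so dist[cell] is the shortest known
    path length from cell to the nearest goal."""
    dist = {g: 0 for g in goals}
    queue = deque(goals)
    while queue:
        cell = queue.popleft()
        for nxt in open_neighbors(cell):
            if nxt not in dist:
                dist[nxt] = dist[cell] + 1
                queue.append(nxt)
    return dist
```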
Core update:
```python
def step(self) -> None:
    if self.done:
        return
    self._sense_cell(self.position)
    self.distances = self._compute_distances()
    if self.position in self.goals:
        self.done = True
        return
    nxt = self._pick_next_cell()
    if nxt is None:
        self.done = True
        return
    self.position = nxt
    self.trail.append(nxt)
    self.steps += 1
    if self.steps >= self.max_steps:
        self.done = True
```
Decision policy in `_pick_next_cell()`:
- Candidate neighbors must be `OPEN` in `known`.
- Among candidates, the minimum of the pair `(distance_from_floodfill, goal_manhattan)` wins, so Manhattan distance to the goal only breaks ties in flood distance.
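That policy reduces to a single `min(...)` over a lexicographic key; a sketch (the real method reads these values from explorer state rather than taking them as arguments):

```python
def pick_next_cell(candidates, flood_distance, goal_manhattan):
    """Pick the known-OPEN neighbor with the lowest flood distance,
    breaking ties by Manhattan distance to the nearest goal."""
    if not candidates:
        return None
    return min(candidates, key=lambda c: (flood_distance[c], goal_manhattan(c)))
```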
Step budget in the current repo implementation:
```python
self.max_steps = self.maze.size * self.maze.size * 8
```
4. A* Runner Internals #
`AStarRunner` (2D) #
- The priority queue stores `(f_score, counter, node)` for deterministic tie handling.
- Uses:
  - `open_heap` (heapq frontier)
  - `open_set` (fast membership tests)
  - `closed` (expanded nodes)
  - `came_from`, `g_score`
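Selecting which node to expand next is not shown in the repo excerpt; a sketch of a lazy-deletion pop, assuming the same `open_heap`/`open_set`/`closed` fields (a node can be pushed more than once when its `g` improves, so stale heap entries must be skipped):

```python
import heapq

def pop_next(open_heap, open_set, closed):
    """Pop the lowest-f node, skipping entries whose node was
    already expanded (lazy deletion)."""
    while open_heap:
        f, _, node = heapq.heappop(open_heap)
        if node in closed:
            continue
        open_set.discard(node)
        closed.add(node)
        return node
    return None  # frontier exhausted: no path exists
```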
Expansion step:
```python
for _, _, d in DIRS:
    if self.maze.has_wall(node, d):
        continue
    nxt = self.maze.neighbor(node, d)
    if nxt is None or nxt in self.closed:
        continue
    trial = self.g_score[node] + 1
    if trial < self.g_score.get(nxt, 10**9):
        self.came_from[nxt] = node
        self.g_score[nxt] = trial
        self._counter += 1
        f_score = trial + self._heuristic(nxt)
        heapq.heappush(self.open_heap, (f_score, self._counter, nxt))
        self.open_set.add(nxt)
```
Heuristic:
```python
min(abs(cell[0] - g[0]) + abs(cell[1] - g[1]) for g in self.goals)
```
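Because `goal_cells()` can return four centers on even grids, the heuristic takes the minimum over all goals, which keeps it admissible on a 4-connected grid. A quick check of that behavior, assuming the four center cells of a 16x16 grid as the goal set:

```python
def heuristic(cell, goals):
    # Manhattan distance to the nearest goal.
    return min(abs(cell[0] - g[0]) + abs(cell[1] - g[1]) for g in goals)

# Assumed goal set: the four center cells of a 16x16 maze.
goals = [(7, 7), (7, 8), (8, 7), (8, 8)]
```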
5. Results #