Introducing the Prophet Ontology
Read article →
Chanzo Bryan
Building scalable backend services at Netflix. Expert in creating high-performance event-driven systems that handle billions of events.
Creating AI agents that master complex games. Built environments for Terra Mystica and SpeedRunners, with ongoing research into multi-agent learning systems.
From React Native mobile apps to Rust-based game engines. Comfortable across the entire stack with a focus on performance and user experience.
Ontology-first backend scaffolding across Java, Node, and Python
A multi-stack code generation platform that compiles a single domain ontology into SQL, OpenAPI, and framework-specific integration code. Prophet reduces contract drift by generating typed actions, query APIs, persistence adapters, and migration artifacts with deterministic outputs and compatibility checks.
View Project Details →
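The core idea, one ontology compiled into multiple artifacts, can be illustrated with a minimal sketch. All names here (`Entity`, `to_sql`, `to_openapi_schema`) are hypothetical and stand in for Prophet's much richer pipeline:

```python
from dataclasses import dataclass

# Hypothetical, minimal ontology model; Prophet's real schema is far richer.
@dataclass
class Field:
    name: str
    type: str          # ontology-level type, e.g. "string", "int"
    required: bool = True

@dataclass
class Entity:
    name: str
    fields: list[Field]

SQL_TYPES = {"string": "TEXT", "int": "INTEGER", "bool": "BOOLEAN"}

def to_sql(entity: Entity) -> str:
    """Compile one entity into a CREATE TABLE statement (deterministic output)."""
    cols = [
        f"  {f.name} {SQL_TYPES[f.type]}" + ("" if f.required else " NULL")
        for f in entity.fields
    ]
    return f"CREATE TABLE {entity.name.lower()} (\n" + ",\n".join(cols) + "\n);"

def to_openapi_schema(entity: Entity) -> dict:
    """Compile the same entity into an OpenAPI component schema."""
    oas_types = {"string": "string", "int": "integer", "bool": "boolean"}
    return {
        "type": "object",
        "required": [f.name for f in entity.fields if f.required],
        "properties": {f.name: {"type": oas_types[f.type]} for f in entity.fields},
    }

user = Entity("User", [Field("id", "int"), Field("email", "string"),
                       Field("nickname", "string", required=False)])
print(to_sql(user))
```

Because both generators read the same `Entity`, the SQL schema and the API contract cannot drift apart, which is the property the deterministic outputs and compatibility checks protect at scale.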
AI-powered bill splitting mobile app
A React Native mobile app for seamless expense tracking with AI-powered receipt scanning. Built with TypeScript, Expo, and Supabase, it features real-time sync, passwordless authentication, and intelligent OCR to automatically parse receipts and split bills among groups.
View Project Details →
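Even the last step, dividing a parsed total among a group, needs care so the shares sum exactly to the bill. A hedged sketch of that rounding logic (illustrative only, not the app's actual per-item, AI-assisted code):

```python
def split_bill(total_cents: int, n: int) -> list[int]:
    """Split a total (in cents) among n people so shares sum exactly to the total.

    The first `total_cents % n` people pay one extra cent; naive division
    would lose or invent cents.
    """
    base, remainder = divmod(total_cents, n)
    return [base + (1 if i < remainder else 0) for i in range(n)]

shares = split_bill(1003, 3)   # $10.03 among three people
print(shares)                  # [335, 334, 334]
```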
High-performance board game AI environment
A rules-complete implementation of Terra Mystica in Rust, exposed as a PettingZoo/PyTorch RL environment via PyO3. The core engine is written in Rust for simulation throughput and memory safety, while Python bindings provide seamless integration with modern RL frameworks. Includes Pygame visualization for debugging agent behavior.
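The binding pattern is a thin Python wrapper that delegates every rules query to the native core. A minimal sketch with hypothetical names, where a pure-Python stub stands in for the PyO3-compiled Rust module so the example is self-contained:

```python
class _StubCore:
    """Stands in for the Rust engine: a fixed two-player game with three moves."""
    def __init__(self):
        self.turn = 0

    def legal_actions(self):
        return [0, 1, 2]

    def step(self, action):
        self.turn += 1
        return {"player": self.turn % 2, "done": self.turn >= 4}

class TerraMysticaEnv:
    """PettingZoo-style wrapper: Python owns the API, Rust owns the rules."""
    def __init__(self, core_factory=_StubCore):
        self._core_factory = core_factory
        self._core = core_factory()

    def reset(self):
        self._core = self._core_factory()

    def legal_actions(self):
        # In the real env this is computed at Rust speed across the full ruleset.
        return self._core.legal_actions()

    def step(self, action):
        return self._core.step(action)

env = TerraMysticaEnv()
env.reset()
state = env.step(env.legal_actions()[0])
print(state)  # {'player': 1, 'done': False}
```

Keeping the wrapper this thin means the Python layer adds almost no per-step overhead on top of the native engine.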
View Project Details →
Modular reinforcement learning library
A from-scratch RL library implementing state-of-the-art algorithms including SAC, DQN, IQN, RND, Ape-X, and R2D2. Features a flexible wrapper-based architecture that allows algorithms to be composed and mixed, with backend-agnostic core abstractions for portability. Optimized for single-machine high-performance training without sacrificing code clarity.
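The wrapper-based composition works like gym wrappers: every wrapper exposes the same agent interface, so features stack freely. A sketch of the pattern with hypothetical names, not the library's actual classes:

```python
import random

class Agent:
    """Common interface every wrapper preserves."""
    def act(self, obs):
        raise NotImplementedError

class GreedyAgent(Agent):
    """Base policy: argmax over a (stubbed) per-observation value table."""
    def __init__(self, q_values):
        self.q = q_values

    def act(self, obs):
        return max(range(len(self.q[obs])), key=lambda a: self.q[obs][a])

class EpsilonGreedy(Agent):
    """Wrapper adding exploration on top of any inner agent."""
    def __init__(self, inner, epsilon, rng, n_actions=3):
        self.inner, self.epsilon, self.rng = inner, epsilon, rng
        self.n_actions = n_actions

    def act(self, obs):
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(self.n_actions)
        return self.inner.act(obs)

agent = EpsilonGreedy(GreedyAgent({0: [0.1, 0.9, 0.3]}), epsilon=0.0,
                      rng=random.Random(0))
print(agent.act(0))  # 1 -- greedy action, since epsilon is 0
```

Because wrappers only depend on the `Agent` interface, ideas like n-step returns or prioritized replay can be mixed into different base algorithms without touching their internals.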
View Project Details →
Deep RL agents playing SpeedRunners via direct game injection
A modular reinforcement learning system that trains agents to play SpeedRunners using direct game process injection. A C++ DLL hooks into the game to extract state and inject actions, exposing a Gymnasium-compatible environment via named pipes. Agents are trained using Rainbow IQN and RND in PyTorch, with a clean separation between game interfacing (sr-lib), environment wrapping (sr-gym), and agent training (sr-ai).
View Project Details →
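The sr-gym side of that pipe protocol can be sketched as follows. The message format is hypothetical, and anonymous pipes with a stub "game" thread stand in for the named pipes and the injected C++ DLL so the example runs on its own:

```python
import json
import os
import threading

def fake_game(read_fd, write_fd):
    """Stands in for the injected DLL: reads one action, replies with a state."""
    with os.fdopen(read_fd) as rx, os.fdopen(write_fd, "w") as tx:
        msg = json.loads(rx.readline())
        reply = {"x": msg["action"] * 10, "done": False}
        tx.write(json.dumps(reply) + "\n")
        tx.flush()

class SpeedRunnersEnv:
    """Gymnasium-style step(): serialize the action, block on the game's reply."""
    def __init__(self, tx, rx):
        self.tx, self.rx = tx, rx

    def step(self, action):
        self.tx.write(json.dumps({"action": action}) + "\n")
        self.tx.flush()
        return json.loads(self.rx.readline())

a_read, a_write = os.pipe()   # action pipe: env -> game
s_read, s_write = os.pipe()   # state pipe: game -> env
game = threading.Thread(target=fake_game, args=(a_read, s_write))
game.start()

env = SpeedRunnersEnv(os.fdopen(a_write, "w"), os.fdopen(s_read))
state = env.step(3)
print(state)  # {'x': 30, 'done': False}
game.join()
```

The same request/reply shape is what lets the training code treat a live, injected game process as just another Gymnasium environment.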