Compare commits

23 commits: `rule-0.0.2` ... `ui-0.13.0`

| Author | SHA1 | Date |
|--------|------|------|
| | 33e785d22a | |
| | d16cec176b | |
| | 8744bee2dd | |
| | 5f4d33f3ca | |
| | 767d3051a7 | |
| | b2e62dc60c | |
| | b0399a4e48 | |
| | ec2ab2f365 | |
| | fd4e67d4f7 | |
| | 3cb3160731 | |
| | dbcafd2869 | |
| | 3ecb2c9d66 | |
| | 9ad11fb97a | |
| | e158b0a7f0 | |
| | f1c9df16b6 | |
| | 9d11d25b99 | |
| | 7a045d31d7 | |
| | b518c704fa | |
| | fe8e3c0539 | |
| | 1b16adcc72 | |
| | b4bc72f7e4 | |
| | 8959c3a849 | |
| | 47032378e2 | |
@@ -1,9 +0,0 @@

# Memory Index

## Feedback

- [feedback_keep_structure_updated.md](feedback_keep_structure_updated.md) — Update structure memory files whenever source files are added, removed, or changed

## Project Structure

- [project_structure_root.md](project_structure_root.md) — Top-level layout, modules list, VERSIONS map, navigation rules (skip `build/`, `.gradle/`, `.idea/`)
- [project_structure_api.md](project_structure_api.md) — `modules/api`: all files and types (Board, Piece, Square, GameState, Move, ApiResponse, PlayerInfo)
- [project_structure_core.md](project_structure_core.md) — `modules/core`: all files and types (GameContext, GameRules, MoveValidator, GameController, Parser, Renderer)
@@ -1,16 +0,0 @@

---
name: keep-structure-memory-updated
description: Always update the project structure memory files when adding, removing, or changing source files
type: feedback
---

After any change that adds, removes, renames, or significantly alters a source file, update the relevant structure memory file:

- New/renamed/deleted file in `modules/api` → update `project_structure_api.md`
- New/renamed/deleted file in `modules/core` → update `project_structure_core.md`
- New module, dependency version change, or new top-level directory → update `project_structure_root.md`
- New module added → create a new `project_structure_<module>.md` and add it to `MEMORY.md`

**Why:** Structure memories are the primary navigation aid. Stale entries cause wasted exploration.

**How to apply:** Treat the structure memory update as part of completing any implementation task — do it in the same session, not as a follow-up.
@@ -1,51 +0,0 @@

---
name: module-api-structure
description: File and type overview for the modules/api module (shared domain types)
type: project
---

# Module: `modules/api`

**Purpose:** Shared domain model — pure data types with no game logic. Depended on by `modules/core`.

**Gradle:** `id("scala")`, no `application` plugin. No Quarkus. Uses the scoverage plugin.

**Package root:** `de.nowchess.api`

## Source files (`src/main/scala/de/nowchess/api/`)

### `board/`

| File | Contents |
|------|----------|
| `Board.scala` | `opaque type Board = Map[Square, Piece]` — extensions: `pieceAt`, `withMove`, `pieces`; `Board.initial` sets up the start position |
| `Color.scala` | `enum Color { White, Black }` — `.opposite`, `.label` |
| `Piece.scala` | `case class Piece(color, pieceType)` — convenience vals `WhitePawn`…`BlackKing` |
| `PieceType.scala` | `enum PieceType { Pawn, Knight, Bishop, Rook, Queen, King }` — `.label` |
| `Square.scala` | `enum File { A–H }`, `enum Rank { R1–R8 }`, `case class Square(file, rank)` — `.toString` is algebraic, `Square.fromAlgebraic(s)` |
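
The `Square.fromAlgebraic` mapping above is language-neutral; as a hedged sketch (in Python rather than the project's Scala, with illustrative names, not the project API), algebraic-coordinate parsing and its inverse look like:

```python
def from_algebraic(s: str):
    """Parse 'e4' into (file_index, rank_index), 0-based; None if malformed."""
    if len(s) != 2:
        return None
    file_ch, rank_ch = s[0].lower(), s[1]
    if file_ch < "a" or file_ch > "h" or rank_ch < "1" or rank_ch > "8":
        return None
    return (ord(file_ch) - ord("a"), int(rank_ch) - 1)

def to_algebraic(file_idx: int, rank_idx: int) -> str:
    """Inverse mapping: (4, 3) -> 'e4'."""
    return chr(ord("a") + file_idx) + str(rank_idx + 1)
```

The Scala version returns an `Option[Square]` instead of `None`, but the bounds checks are the same idea.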

### `game/`

| File | Contents |
|------|----------|
| `GameState.scala` | `case class CastlingRights(kingSide, queenSide)` + `.None`/`.Both`; `enum GameResult { WhiteWins, BlackWins, Draw }`; `enum GameStatus { NotStarted, InProgress, Finished(result) }`; `case class GameState(piecePlacement, activeColor, castlingWhite, castlingBlack, enPassantTarget, halfMoveClock, fullMoveNumber, status)` — FEN-compatible snapshot |
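
Since `GameState` carries the six FEN fields, serializing a snapshot is a straight join; a minimal Python sketch, assuming standard FEN field order (field names here are illustrative, not the project's API):

```python
def to_fen(placement: str, active: str, castling: str,
           en_passant: str, halfmove: int, fullmove: int) -> str:
    """Join the six FEN fields; '-' marks empty castling/en-passant fields."""
    return " ".join([placement, active, castling or "-",
                     en_passant or "-", str(halfmove), str(fullmove)])

# The standard start position as a GameState-like snapshot:
start = to_fen("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR",
               "w", "KQkq", "", 0, 1)
```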

### `move/`

| File | Contents |
|------|----------|
| `Move.scala` | `enum PromotionPiece { Knight, Bishop, Rook, Queen }`; `enum MoveType { Normal, CastleKingside, CastleQueenside, EnPassant, Promotion(piece) }`; `case class Move(from, to, moveType = Normal)` |

### `player/`

| File | Contents |
|------|----------|
| `PlayerInfo.scala` | `opaque type PlayerId = String`; `case class PlayerInfo(id: PlayerId, displayName: String)` |

### `response/`

| File | Contents |
|------|----------|
| `ApiResponse.scala` | `sealed trait ApiResponse[+A]` → `Success[A](data)` / `Failure(errors)`; `case class ApiError(code, message, field?)`; `case class Pagination(page, pageSize, totalItems)` + `.totalPages`; `case class PagedResponse[A](items, pagination)` |
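
`Pagination.totalPages` is presumably ceiling division of `totalItems` by `pageSize` (the Scala body is not shown here, so this is an assumption); a minimal Python sketch:

```python
def total_pages(total_items: int, page_size: int) -> int:
    """Ceiling division without floats: 21 items at 10 per page -> 3 pages."""
    if page_size <= 0:
        raise ValueError("page_size must be positive")
    return (total_items + page_size - 1) // page_size
```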

## Test files (`src/test/scala/de/nowchess/api/`)

Mirror of the main structure — one `*Test.scala` per source file using `AnyFunSuite with Matchers`.

## Notes

- `GameState` is FEN-style while `Board` (in `api`) is a `Map[Square, Piece]` — the two are separate representations
- `CastlingRights` is defined here in `api`; the castling logic lives in `core`
@@ -1,48 +0,0 @@

---
name: module-core-structure
description: File and type overview for the modules/core module (TUI chess engine)
type: project
---

# Module: `modules/core`

**Purpose:** Standalone TUI chess application. All game logic, move validation, rendering. Depends on `modules/api`.

**Gradle:** `id("scala")` + `application` plugin. Main class: `de.nowchess.chess.Main`. Uses the scoverage plugin.

**Package root:** `de.nowchess.chess`

## Source files (`src/main/scala/de/nowchess/chess/`)

### Root

| File | Contents |
|------|----------|
| `Main.scala` | Entry point — prints a welcome message, then starts `GameController.gameLoop(GameContext.initial, Color.White)` |

### `controller/`

| File | Contents |
|------|----------|
| `GameController.scala` | `sealed trait MoveResult` ADT: `Quit`, `InvalidFormat`, `NoPiece`, `WrongColor`, `IllegalMove`, `Moved`, `MovedInCheck`, `Checkmate`, `Stalemate`; `object GameController` — `processMove(ctx, turn, raw): MoveResult` (pure), `gameLoop(ctx, turn)` (I/O loop), `applyRightsRevocation(...)` (castling-rights bookkeeping) |
| `Parser.scala` | `object Parser` — `parseMove(input): Option[(Square, Square)]` parses coordinate notation, e.g. `"e2e4"` |
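
`parseMove` can be sketched outside Scala; a hedged Python version (illustrative names, not the project API) that splits a four-character coordinate string into origin and destination squares, returning `None` for malformed input:

```python
def parse_move(raw: str):
    """Parse 'e2e4' into (('e', 2), ('e', 4)); None for malformed input."""
    s = raw.strip().lower()
    if len(s) != 4:
        return None

    def square(part: str):
        file_ch, rank_ch = part[0], part[1]
        if file_ch < "a" or file_ch > "h" or not rank_ch.isdigit():
            return None
        rank = int(rank_ch)
        return (file_ch, rank) if 1 <= rank <= 8 else None

    origin, dest = square(s[:2]), square(s[2:])
    if origin is None or dest is None:
        return None
    return (origin, dest)
```

Returning `None` here mirrors the Scala `Option[(Square, Square)]` signature: the caller maps a failed parse to the `InvalidFormat` result.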

### `logic/`

| File | Contents |
|------|----------|
| `GameContext.scala` | `enum CastleSide { Kingside, Queenside }`; `case class GameContext(board, whiteCastling, blackCastling)` — `.castlingFor(color)`, `.withUpdatedRights(color, rights)`; `GameContext.initial`; `extension (Board).withCastle(color, side)` moves king and rook atomically |
| `GameRules.scala` | `enum PositionStatus { Normal, InCheck, Mated, Drawn }`; `object GameRules` — `isInCheck(board, color)`, `legalMoves(ctx, color): Set[(Square, Square)]`, `gameStatus(ctx, color): PositionStatus` |
| `MoveValidator.scala` | `object MoveValidator` — `isLegal(board, from, to)`, `legalTargets(board, from): Set[Square]` (board-only, no castling), `legalTargets(ctx, from)` (context-aware, includes castling), `isCastle`, `castleSide`, `castlingTargets(ctx, color)` — full castling legality (empty squares, no check through transit) |

### `view/`

| File | Contents |
|------|----------|
| `Renderer.scala` | `object Renderer` — `render(board): String` outputs an ANSI-colored board with file/rank labels |
| `PieceUnicode.scala` | `extension (Piece).unicode: String` maps each piece to its Unicode chess symbol |

## Test files (`src/test/scala/de/nowchess/chess/`)

Mirror of the main structure — one `*Test.scala` per source file using `AnyFunSuite with Matchers`.

## Key design notes

- `MoveValidator` has two overloaded `legalTargets`: one takes `Board` (geometry only), one takes `GameContext` (adds castling)
- `GameRules.legalMoves` filters by check — it calls `MoveValidator.legalTargets(ctx, from)` and then simulates each move
- Castling-rights revocation lives in `GameController.applyRightsRevocation`, triggered after every move
- No `@QuarkusTest` — this module is a plain Scala application, not a Quarkus service
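
The check-filtering step described in the design notes (generate pseudo-legal targets, simulate each move, discard those that leave the king in check) can be sketched generically; this Python version uses assumed helper callbacks (`pseudo_legal_targets`, `apply_move`, `in_check`) standing in for the project's Scala functions:

```python
def legal_moves(board, color, pseudo_legal_targets, apply_move, in_check):
    """Keep only moves whose resulting position does not leave `color` in check."""
    moves = set()
    for origin, targets in pseudo_legal_targets(board, color).items():
        for dest in targets:
            next_board = apply_move(board, origin, dest)  # simulate on a copy
            if not in_check(next_board, color):
                moves.add((origin, dest))
    return moves
```

The simulate-and-test pattern is simple but quadratic-ish; engines that need speed (like the bot's alpha-beta search) usually cache or incrementalize this step.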
@@ -1,55 +0,0 @@

---
name: project-root-structure
description: Top-level project structure, modules list, and navigation notes for NowChessSystems
type: project
---

# NowChessSystems — Root Structure

## Directory layout (skip `build/`, `.gradle/`, `.idea/`)

```
NowChessSystems/
├── build.gradle.kts        # Root: sonarqube plugin, VERSIONS map
├── settings.gradle.kts     # include(":modules:core", ":modules:api")
├── gradlew / gradlew.bat
├── CLAUDE.md               # Project instructions for Claude Code
├── .claude/
│   ├── CLAUDE.MD           # Working agreement (plan/verify/unresolved)
│   ├── settings.json
│   └── agents/             # architect, code-reviewer, gradle-builder, scala-implementer, test-writer
├── docs/
│   ├── Claude-Skills.md
│   ├── Security.md
│   └── unresolved.md
├── jacoco-reporter/        # Python scripts for coverage-gap reporting
└── modules/
    ├── api/                # Shared domain types (no logic)
    └── core/               # TUI chess engine + game logic
```

## Modules

| Module | Gradle path | Purpose |
|--------|-------------|---------|
| `api` | `:modules:api` | Shared domain model: Board, Piece, Move, GameState, ApiResponse |
| `core` | `:modules:core` | TUI chess app: game logic, move validation, rendering |

`core` depends on `api` via `implementation(project(":modules:api"))`.

## VERSIONS (root `build.gradle.kts`)

| Key | Value |
|-----|-------|
| `QUARKUS_SCALA3` | 1.0.0 |
| `SCALA3` | 3.5.1 |
| `SCALA_LIBRARY` | 2.13.18 |
| `SCALATEST` | 3.2.19 |
| `SCALATEST_JUNIT` | 0.1.11 |
| `SCOVERAGE` | 2.1.1 |

## Navigation rules

- **Always skip** `build/`, `.gradle/`, `.idea/` when exploring — they are generated artifacts
- Tests use `AnyFunSuite with Matchers` (ScalaTest), not JUnit `@Test`
- No Quarkus in the current modules — Quarkus is planned for future services
- Agent workflow: architect → scala-implementer → test-writer → gradle-builder → code-reviewer
@@ -0,0 +1,334 @@

# NowChessSystems — AI Context Map

> **Stack:** raw-http | none | unknown | scala
> 0 routes | 0 models | 0 components | 63 lib files | 1 env vars | 1 middleware
> **Token savings:** this file is ~0 tokens. Without it, AI exploration would cost ~0 tokens. **Saves ~0 tokens per conversation.**

---

# Libraries

- `jacoco-reporter/scoverage_coverage_gaps.py`
  - function parse_scoverage_xml: (xml_path) -> tuple[dict, list[ClassGap]]
  - function format_agent: (project_stats, classes) -> str
  - function format_json: (project_stats, classes) -> str
  - function format_markdown: (project_stats, classes) -> str
  - function format_module_gaps: (module_name, classes, stmt_pct) -> str
  - function run_scan_modules: (modules_dir, package_filter, min_coverage) -> None
  - _...4 more_
- `jacoco-reporter/test_gaps.py`
  - function parse_suite_xml: (xml_path) -> SuiteResult
  - function load_module: (module_dir, results_subdir) -> Optional[ModuleResult]
  - function format_module: (mod) -> str
  - function run: (modules_dir, results_subdir, module_filter) -> None
  - function main: () -> None
  - class TestCase
  - _...2 more_
- `modules/api/src/main/scala/de/nowchess/api/board/Board.scala`
  - class Board
  - function apply
  - function pieceAt
  - function updated
  - function removed
  - function withMove
  - _...2 more_
- `modules/api/src/main/scala/de/nowchess/api/board/CastlingRights.scala`
  - function hasAnyRights
  - function hasRights
  - function revokeColor
  - function revokeKingSide
  - function revokeQueenSide
  - class CastlingRights
- `modules/api/src/main/scala/de/nowchess/api/board/Color.scala` — function opposite, function label
- `modules/api/src/main/scala/de/nowchess/api/board/Piece.scala` — class Piece
- `modules/api/src/main/scala/de/nowchess/api/board/PieceType.scala` — function label
- `modules/api/src/main/scala/de/nowchess/api/board/Square.scala`
  - class Square
  - function fromAlgebraic
  - function offset
- `modules/api/src/main/scala/de/nowchess/api/game/GameContext.scala`
  - function withBoard
  - function withTurn
  - function withCastlingRights
  - function withEnPassantSquare
  - function withHalfMoveClock
  - function withMove
  - _...2 more_
- `modules/api/src/main/scala/de/nowchess/api/player/PlayerInfo.scala` — class PlayerId, function apply
- `modules/api/src/main/scala/de/nowchess/api/response/ApiResponse.scala`
  - class ApiResponse
  - function error
  - function totalPages
- `modules/bot/python/nnue.py`
  - function get_weights_dir: ()
  - function get_data_dir: ()
  - function list_checkpoints: ()
  - function migrate_legacy_data: ()
  - function show_header: ()
  - function show_checkpoints_table: ()
  - _...10 more_
- `modules/bot/python/src/dataset.py`
  - function get_datasets_dir: () -> Path
  - function next_dataset_version: () -> int
  - function list_datasets: () -> List[Tuple[int, Dict]]
  - function load_dataset_metadata: (version) -> Optional[Dict]
  - function save_dataset_metadata: (version, metadata) -> None
  - function create_dataset: (version, labeled_jsonl_path, sources, stockfish_depth) -> Path
  - _...4 more_
- `modules/bot/python/src/export.py` — function export_to_nbai: (weights_file, output_file, trained_by, train_loss)
- `modules/bot/python/src/generate.py` — function play_random_game_and_collect_positions: (output_file, total_positions, samples_per_game, min_move, max_move, num_workers)
- `modules/bot/python/src/label.py` — function normalize_evaluation: (cp_value, method, scale), function label_positions_with_stockfish: (positions_file, output_file, stockfish_path, batch_size, depth, verbose, normalize, num_workers)
- `modules/bot/python/src/tactical_positions_extractor.py`
  - function download_and_extract_puzzle_db: (url, output_dir)
  - function extract_puzzle_positions: (puzzle_csv, max_puzzles) -> Set[str]
  - function load_positions_from_file: (file_path) -> Set[str]
  - function merge_positions: (tactical, other, output_file)
  - function extract_tactical_only: (puzzle_csv, output_file, max_puzzles) -> int
  - function interactive_merge_positions: (puzzle_csv, output_file, max_puzzles)
- `modules/bot/python/src/train.py`
  - function fen_to_features: (fen)
  - function find_next_version: (base_name)
  - function save_metadata: (weights_file, metadata)
  - function train_nnue: (data_file, output_file, epochs, batch_size, lr, checkpoint, stockfish_depth, use_versioning, early_stopping_patience, weight_decay, subsample_ratio)
  - function burst_train: (data_file, output_file, duration_minutes, epochs_per_season, early_stopping_patience, batch_size, lr, initial_checkpoint, stockfish_depth, use_versioning, weight_decay, subsample_ratio)
  - class NNUEDataset
  - _...1 more_
- `modules/bot/src/main/scala/de/nowchess/bot/Bot.scala`
  - class Bot
  - function name
  - function nextMove
- `modules/bot/src/main/scala/de/nowchess/bot/BotController.scala`
  - class BotController
  - function getBot
  - function listBots
- `modules/bot/src/main/scala/de/nowchess/bot/BotMoveRepetition.scala`
  - class BotMoveRepetition
  - function blockedMoves
  - function repeatedMove
  - function filterAllowed
- `modules/bot/src/main/scala/de/nowchess/bot/Config.scala` — class Config
- `modules/bot/src/main/scala/de/nowchess/bot/ai/Evaluation.scala`
  - class Evaluation
  - class CHECKMATE_SCORE
  - class DRAW_SCORE
  - function evaluate
  - function initAccumulator
  - function copyAccumulator
  - _...2 more_
- `modules/bot/src/main/scala/de/nowchess/bot/bots/classic/EvaluationClassic.scala`
  - class EvaluationClassic
  - function evaluate
  - function countRay
- `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/EvaluationNNUE.scala` — class EvaluationNNUE, function evaluate
- `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/NNUE.scala`
  - class NNUE
  - function initAccumulator
  - function pushAccumulator
  - function copyAccumulator
  - function recomputeAccumulator
  - function validateAccumulator
  - _...4 more_
- `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/NbaiLoader.scala`
  - class NbaiLoader
  - function load
  - function loadDefault
- `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/NbaiMigrator.scala` — class NbaiMigrator, function migrateFromBin
- `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/NbaiModel.scala`
  - function toJson
  - class NbaiMetadata
  - function fromJson
  - function str
  - function num
- `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/NbaiWriter.scala` — class NbaiWriter, function write
- `modules/bot/src/main/scala/de/nowchess/bot/logic/AlphaBetaSearch.scala`
  - function bestMove
  - function bestMove
  - function bestMoveWithTime
  - function bestMoveWithTime
  - function loop
  - function loop
  - _...2 more_
- `modules/bot/src/main/scala/de/nowchess/bot/logic/MoveOrdering.scala`
  - class MoveOrdering
  - class OrderingContext
  - function addKillerMove
  - function getKillerMoves
  - function addHistory
  - function getHistory
  - _...3 more_
- `modules/bot/src/main/scala/de/nowchess/bot/logic/TranspositionTable.scala`
  - function probe
  - function store
  - function clear
- `modules/bot/src/main/scala/de/nowchess/bot/util/PolyglotBook.scala` — function probe, function select
- `modules/bot/src/main/scala/de/nowchess/bot/util/PolyglotHash.scala` — class PolyglotHash, function hash
- `modules/bot/src/main/scala/de/nowchess/bot/util/ZobristHash.scala`
  - class ZobristHash
  - function hash
  - function nextHash
- `modules/core/src/main/scala/de/nowchess/chess/command/Command.scala`
  - class Command
  - function execute
  - function undo
  - function description
  - class MoveResult
- `modules/core/src/main/scala/de/nowchess/chess/command/CommandInvoker.scala`
  - class CommandInvoker
  - function execute
  - function undo
  - function redo
  - function history
  - function getCurrentIndex
  - _...3 more_
- `modules/core/src/main/scala/de/nowchess/chess/controller/Parser.scala` — class Parser, function parseMove
- `modules/core/src/main/scala/de/nowchess/chess/engine/GameEngine.scala`
  - class GameEngine
  - function isPendingPromotion
  - function board
  - function turn
  - function context
  - function canUndo
  - _...11 more_
- `modules/core/src/main/scala/de/nowchess/chess/observer/Observer.scala`
  - function context
  - class Observer
  - function onGameEvent
  - class Observable
  - function subscribe
  - function unsubscribe
  - _...1 more_
- `modules/io/src/main/scala/de/nowchess/io/GameContextExport.scala` — class GameContextExport, function exportGameContext
- `modules/io/src/main/scala/de/nowchess/io/GameContextImport.scala` — class GameContextImport, function importGameContext
- `modules/io/src/main/scala/de/nowchess/io/GameFileService.scala`
  - class GameFileService
  - function saveGameToFile
  - function loadGameFromFile
  - class FileSystemGameService
  - function saveGameToFile
  - function loadGameFromFile
- `modules/io/src/main/scala/de/nowchess/io/fen/FenExporter.scala`
  - class FenExporter
  - function boardToFen
  - function gameContextToFen
  - function exportGameContext
- `modules/io/src/main/scala/de/nowchess/io/fen/FenParser.scala`
  - class FenParser
  - function parseFen
  - function importGameContext
  - function parseBoard
- `modules/io/src/main/scala/de/nowchess/io/fen/FenParserCombinators.scala`
  - class FenParserCombinators
  - function parseFen
  - function parseBoard
  - function importGameContext
- `modules/io/src/main/scala/de/nowchess/io/fen/FenParserFastParse.scala`
  - class FenParserFastParse
  - function parseFen
  - function parseBoard
  - function importGameContext
- `modules/io/src/main/scala/de/nowchess/io/fen/FenParserSupport.scala` — function buildSquares
- `modules/io/src/main/scala/de/nowchess/io/json/JsonExporter.scala` — class JsonExporter, function exportGameContext
- `modules/io/src/main/scala/de/nowchess/io/json/JsonParser.scala` — class JsonParser, function importGameContext
- `modules/io/src/main/scala/de/nowchess/io/pgn/PgnExporter.scala`
  - class PgnExporter
  - function exportGameContext
  - function exportGame
- `modules/io/src/main/scala/de/nowchess/io/pgn/PgnParser.scala`
  - class PgnParser
  - function validatePgn
  - function importGameContext
  - function parsePgn
  - function parseAlgebraicMove
- `modules/rule/src/main/scala/de/nowchess/rules/RuleSet.scala`
  - class RuleSet
  - function candidateMoves
  - function legalMoves
  - function allLegalMoves
  - function isCheck
  - function isCheckmate
  - _...4 more_
- `modules/rule/src/main/scala/de/nowchess/rules/sets/DefaultRules.scala`
  - class DefaultRules
  - function loop
  - function toMoves
  - function loop
- `modules/ui/src/main/scala/de/nowchess/ui/Main.scala` — class Main, function main
- `modules/ui/src/main/scala/de/nowchess/ui/gui/ChessBoardView.scala`
  - class ChessBoardView
  - function updateBoard
  - function updateUndoRedoButtons
  - function showMessage
  - function showPromotionDialog
- `modules/ui/src/main/scala/de/nowchess/ui/gui/ChessGUI.scala`
  - class ChessGUIApp
  - class ChessGUILauncher
  - function getEngine
  - function launch
- `modules/ui/src/main/scala/de/nowchess/ui/gui/GUIObserver.scala` — class GUIObserver
- `modules/ui/src/main/scala/de/nowchess/ui/gui/PieceSprites.scala`
  - class PieceSprites
  - function loadPieceImage
  - class SquareColors
- `modules/ui/src/main/scala/de/nowchess/ui/terminal/TerminalUI.scala` — class TerminalUI, function start
- `modules/ui/src/main/scala/de/nowchess/ui/utils/PieceUnicode.scala` — function unicode
- `modules/ui/src/main/scala/de/nowchess/ui/utils/Renderer.scala` — class Renderer, function render

---

# Config

## Environment Variables

- `STOCKFISH_PATH` **required** — modules/bot/python/nnue.py
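
A required variable like `STOCKFISH_PATH` is typically resolved once with a loud failure message; a small Python sketch (illustrative, not the actual code in `nnue.py`):

```python
import os

def require_env(name: str) -> str:
    """Return the value of a required environment variable or fail loudly."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is required (e.g. export {name}=/usr/bin/stockfish)")
    return value
```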

---

# Middleware

## custom

- generate — `modules/bot/python/src/generate.py`

---

# Dependency Graph

## Most Imported Files (change these carefully)

- `modules/api/src/main/scala/de/nowchess/api/game/GameContext.scala` — imported by **60** files
- `modules/api/src/main/scala/de/nowchess/api/move/Move.scala` — imported by **40** files
- `modules/api/src/main/scala/de/nowchess/api/board/Square.scala` — imported by **39** files
- `modules/api/src/main/scala/de/nowchess/api/board/Color.scala` — imported by **36** files
- `modules/api/src/main/scala/de/nowchess/api/board/Board.scala` — imported by **22** files
- `modules/api/src/main/scala/de/nowchess/api/board/PieceType.scala` — imported by **21** files
- `modules/api/src/main/scala/de/nowchess/api/board/Piece.scala` — imported by **21** files
- `modules/rule/src/main/scala/de/nowchess/rules/sets/DefaultRules.scala` — imported by **17** files
- `modules/rule/src/main/scala/de/nowchess/rules/RuleSet.scala` — imported by **10** files
- `modules/io/src/main/scala/de/nowchess/io/fen/FenParser.scala` — imported by **10** files
- `modules/api/src/main/scala/de/nowchess/api/board/CastlingRights.scala` — imported by **8** files
- `modules/io/src/main/scala/de/nowchess/io/GameContextImport.scala` — imported by **8** files
- `modules/bot/src/main/scala/de/nowchess/bot/util/PolyglotBook.scala` — imported by **5** files
- `modules/bot/src/main/scala/de/nowchess/bot/BotDifficulty.scala` — imported by **5** files
- `modules/io/src/main/scala/de/nowchess/io/GameContextExport.scala` — imported by **5** files
- `modules/bot/src/main/scala/de/nowchess/bot/bots/ClassicalBot.scala` — imported by **4** files
- `modules/bot/src/main/scala/de/nowchess/bot/bots/classic/EvaluationClassic.scala` — imported by **4** files
- `modules/bot/src/main/scala/de/nowchess/bot/logic/AlphaBetaSearch.scala` — imported by **4** files
- `modules/bot/src/main/scala/de/nowchess/bot/Bot.scala` — imported by **4** files
- `modules/core/src/main/scala/de/nowchess/chess/observer/Observer.scala` — imported by **4** files

## Import Map (who imports what)

- `modules/api/src/main/scala/de/nowchess/api/game/GameContext.scala` ← `modules/bot/src/main/scala/de/nowchess/bot/Bot.scala`, `modules/bot/src/main/scala/de/nowchess/bot/BotMoveRepetition.scala`, `modules/bot/src/main/scala/de/nowchess/bot/ai/Evaluation.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/ClassicalBot.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/HybridBot.scala` +55 more
- `modules/api/src/main/scala/de/nowchess/api/move/Move.scala` ← `modules/api/src/main/scala/de/nowchess/api/game/GameContext.scala`, `modules/api/src/test/scala/de/nowchess/api/board/BoardTest.scala`, `modules/api/src/test/scala/de/nowchess/api/game/GameContextTest.scala`, `modules/bot/src/main/scala/de/nowchess/bot/Bot.scala`, `modules/bot/src/main/scala/de/nowchess/bot/BotMoveRepetition.scala` +35 more
- `modules/api/src/main/scala/de/nowchess/api/board/Square.scala` ← `modules/api/src/main/scala/de/nowchess/api/game/GameContext.scala`, `modules/api/src/main/scala/de/nowchess/api/move/Move.scala`, `modules/api/src/test/scala/de/nowchess/api/game/GameContextTest.scala`, `modules/api/src/test/scala/de/nowchess/api/move/MoveTest.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/classic/EvaluationClassic.scala` +34 more
- `modules/api/src/main/scala/de/nowchess/api/board/Color.scala` ← `modules/api/src/main/scala/de/nowchess/api/game/GameContext.scala`, `modules/api/src/test/scala/de/nowchess/api/game/GameContextTest.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/classic/EvaluationClassic.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/NNUE.scala`, `modules/bot/src/main/scala/de/nowchess/bot/logic/MoveOrdering.scala` +31 more
- `modules/api/src/main/scala/de/nowchess/api/board/Board.scala` ← `modules/api/src/main/scala/de/nowchess/api/game/GameContext.scala`, `modules/api/src/test/scala/de/nowchess/api/game/GameContextTest.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/NNUE.scala`, `modules/bot/src/main/scala/de/nowchess/bot/logic/MoveOrdering.scala`, `modules/bot/src/test/scala/de/nowchess/bot/AlphaBetaSearchTest.scala` +17 more
- `modules/api/src/main/scala/de/nowchess/api/board/PieceType.scala` ← `modules/bot/src/main/scala/de/nowchess/bot/bots/classic/EvaluationClassic.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/NNUE.scala`, `modules/bot/src/main/scala/de/nowchess/bot/logic/AlphaBetaSearch.scala`, `modules/bot/src/main/scala/de/nowchess/bot/logic/MoveOrdering.scala`, `modules/bot/src/main/scala/de/nowchess/bot/util/PolyglotHash.scala` +16 more
- `modules/api/src/main/scala/de/nowchess/api/board/Piece.scala` ← `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/NNUE.scala`, `modules/bot/src/main/scala/de/nowchess/bot/logic/MoveOrdering.scala`, `modules/bot/src/main/scala/de/nowchess/bot/util/PolyglotHash.scala`, `modules/bot/src/main/scala/de/nowchess/bot/util/ZobristHash.scala`, `modules/bot/src/test/scala/de/nowchess/bot/AlphaBetaSearchTest.scala` +16 more
- `modules/rule/src/main/scala/de/nowchess/rules/sets/DefaultRules.scala` ← `modules/bot/src/main/scala/de/nowchess/bot/bots/ClassicalBot.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/HybridBot.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/NNUEBot.scala`, `modules/bot/src/main/scala/de/nowchess/bot/logic/AlphaBetaSearch.scala`, `modules/bot/src/test/scala/de/nowchess/bot/AlphaBetaSearchTest.scala` +12 more
- `modules/rule/src/main/scala/de/nowchess/rules/RuleSet.scala` ← `modules/bot/src/main/scala/de/nowchess/bot/bots/ClassicalBot.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/HybridBot.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/NNUEBot.scala`, `modules/bot/src/main/scala/de/nowchess/bot/logic/AlphaBetaSearch.scala`, `modules/bot/src/test/scala/de/nowchess/bot/AlphaBetaSearchTest.scala` +5 more
- `modules/io/src/main/scala/de/nowchess/io/fen/FenParser.scala` ← `modules/bot/src/test/scala/de/nowchess/bot/PolyglotHashTest.scala`, `modules/core/src/test/scala/de/nowchess/chess/engine/EngineTestHelpers.scala`, `modules/core/src/test/scala/de/nowchess/chess/engine/GameEngineLoadGameTest.scala`, `modules/core/src/test/scala/de/nowchess/chess/engine/GameEngineNotationTest.scala`, `modules/core/src/test/scala/de/nowchess/chess/engine/GameEnginePromotionTest.scala` +5 more

---

_Generated by [codesight](https://github.com/Houseofmvps/codesight) — see your codebase clearly_
@@ -0,0 +1,5 @@

# Config

## Environment Variables

- `STOCKFISH_PATH` **required** — modules/bot/python/nnue.py
@@ -0,0 +1,37 @@
# Dependency Graph

## Most Imported Files (change these carefully)

- `modules/api/src/main/scala/de/nowchess/api/game/GameContext.scala` — imported by **60** files
- `modules/api/src/main/scala/de/nowchess/api/move/Move.scala` — imported by **40** files
- `modules/api/src/main/scala/de/nowchess/api/board/Square.scala` — imported by **39** files
- `modules/api/src/main/scala/de/nowchess/api/board/Color.scala` — imported by **36** files
- `modules/api/src/main/scala/de/nowchess/api/board/Board.scala` — imported by **22** files
- `modules/api/src/main/scala/de/nowchess/api/board/PieceType.scala` — imported by **21** files
- `modules/api/src/main/scala/de/nowchess/api/board/Piece.scala` — imported by **21** files
- `modules/rule/src/main/scala/de/nowchess/rules/sets/DefaultRules.scala` — imported by **17** files
- `modules/rule/src/main/scala/de/nowchess/rules/RuleSet.scala` — imported by **10** files
- `modules/io/src/main/scala/de/nowchess/io/fen/FenParser.scala` — imported by **10** files
- `modules/api/src/main/scala/de/nowchess/api/board/CastlingRights.scala` — imported by **8** files
- `modules/io/src/main/scala/de/nowchess/io/GameContextImport.scala` — imported by **8** files
- `modules/bot/src/main/scala/de/nowchess/bot/util/PolyglotBook.scala` — imported by **5** files
- `modules/bot/src/main/scala/de/nowchess/bot/BotDifficulty.scala` — imported by **5** files
- `modules/io/src/main/scala/de/nowchess/io/GameContextExport.scala` — imported by **5** files
- `modules/bot/src/main/scala/de/nowchess/bot/bots/ClassicalBot.scala` — imported by **4** files
- `modules/bot/src/main/scala/de/nowchess/bot/bots/classic/EvaluationClassic.scala` — imported by **4** files
- `modules/bot/src/main/scala/de/nowchess/bot/logic/AlphaBetaSearch.scala` — imported by **4** files
- `modules/bot/src/main/scala/de/nowchess/bot/Bot.scala` — imported by **4** files
- `modules/core/src/main/scala/de/nowchess/chess/observer/Observer.scala` — imported by **4** files

## Import Map (who imports what)

- `modules/api/src/main/scala/de/nowchess/api/game/GameContext.scala` ← `modules/bot/src/main/scala/de/nowchess/bot/Bot.scala`, `modules/bot/src/main/scala/de/nowchess/bot/BotMoveRepetition.scala`, `modules/bot/src/main/scala/de/nowchess/bot/ai/Evaluation.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/ClassicalBot.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/HybridBot.scala` +55 more
- `modules/api/src/main/scala/de/nowchess/api/move/Move.scala` ← `modules/api/src/main/scala/de/nowchess/api/game/GameContext.scala`, `modules/api/src/test/scala/de/nowchess/api/board/BoardTest.scala`, `modules/api/src/test/scala/de/nowchess/api/game/GameContextTest.scala`, `modules/bot/src/main/scala/de/nowchess/bot/Bot.scala`, `modules/bot/src/main/scala/de/nowchess/bot/BotMoveRepetition.scala` +35 more
- `modules/api/src/main/scala/de/nowchess/api/board/Square.scala` ← `modules/api/src/main/scala/de/nowchess/api/game/GameContext.scala`, `modules/api/src/main/scala/de/nowchess/api/move/Move.scala`, `modules/api/src/test/scala/de/nowchess/api/game/GameContextTest.scala`, `modules/api/src/test/scala/de/nowchess/api/move/MoveTest.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/classic/EvaluationClassic.scala` +34 more
- `modules/api/src/main/scala/de/nowchess/api/board/Color.scala` ← `modules/api/src/main/scala/de/nowchess/api/game/GameContext.scala`, `modules/api/src/test/scala/de/nowchess/api/game/GameContextTest.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/classic/EvaluationClassic.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/NNUE.scala`, `modules/bot/src/main/scala/de/nowchess/bot/logic/MoveOrdering.scala` +31 more
- `modules/api/src/main/scala/de/nowchess/api/board/Board.scala` ← `modules/api/src/main/scala/de/nowchess/api/game/GameContext.scala`, `modules/api/src/test/scala/de/nowchess/api/game/GameContextTest.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/NNUE.scala`, `modules/bot/src/main/scala/de/nowchess/bot/logic/MoveOrdering.scala`, `modules/bot/src/test/scala/de/nowchess/bot/AlphaBetaSearchTest.scala` +17 more
- `modules/api/src/main/scala/de/nowchess/api/board/PieceType.scala` ← `modules/bot/src/main/scala/de/nowchess/bot/bots/classic/EvaluationClassic.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/NNUE.scala`, `modules/bot/src/main/scala/de/nowchess/bot/logic/AlphaBetaSearch.scala`, `modules/bot/src/main/scala/de/nowchess/bot/logic/MoveOrdering.scala`, `modules/bot/src/main/scala/de/nowchess/bot/util/PolyglotHash.scala` +16 more
- `modules/api/src/main/scala/de/nowchess/api/board/Piece.scala` ← `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/NNUE.scala`, `modules/bot/src/main/scala/de/nowchess/bot/logic/MoveOrdering.scala`, `modules/bot/src/main/scala/de/nowchess/bot/util/PolyglotHash.scala`, `modules/bot/src/main/scala/de/nowchess/bot/util/ZobristHash.scala`, `modules/bot/src/test/scala/de/nowchess/bot/AlphaBetaSearchTest.scala` +16 more
- `modules/rule/src/main/scala/de/nowchess/rules/sets/DefaultRules.scala` ← `modules/bot/src/main/scala/de/nowchess/bot/bots/ClassicalBot.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/HybridBot.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/NNUEBot.scala`, `modules/bot/src/main/scala/de/nowchess/bot/logic/AlphaBetaSearch.scala`, `modules/bot/src/test/scala/de/nowchess/bot/AlphaBetaSearchTest.scala` +12 more
- `modules/rule/src/main/scala/de/nowchess/rules/RuleSet.scala` ← `modules/bot/src/main/scala/de/nowchess/bot/bots/ClassicalBot.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/HybridBot.scala`, `modules/bot/src/main/scala/de/nowchess/bot/bots/NNUEBot.scala`, `modules/bot/src/main/scala/de/nowchess/bot/logic/AlphaBetaSearch.scala`, `modules/bot/src/test/scala/de/nowchess/bot/AlphaBetaSearchTest.scala` +5 more
- `modules/io/src/main/scala/de/nowchess/io/fen/FenParser.scala` ← `modules/bot/src/test/scala/de/nowchess/bot/PolyglotHashTest.scala`, `modules/core/src/test/scala/de/nowchess/chess/engine/EngineTestHelpers.scala`, `modules/core/src/test/scala/de/nowchess/chess/engine/GameEngineLoadGameTest.scala`, `modules/core/src/test/scala/de/nowchess/chess/engine/GameEngineNotationTest.scala`, `modules/core/src/test/scala/de/nowchess/chess/engine/GameEnginePromotionTest.scala` +5 more
@@ -0,0 +1,266 @@
# Libraries

- `jacoco-reporter/scoverage_coverage_gaps.py`
  - function parse_scoverage_xml: (xml_path) -> tuple[dict, list[ClassGap]]
  - function format_agent: (project_stats, classes) -> str
  - function format_json: (project_stats, classes) -> str
  - function format_markdown: (project_stats, classes) -> str
  - function format_module_gaps: (module_name, classes, stmt_pct) -> str
  - function run_scan_modules: (modules_dir, package_filter, min_coverage) -> None
  - _...4 more_
- `jacoco-reporter/test_gaps.py`
  - function parse_suite_xml: (xml_path) -> SuiteResult
  - function load_module: (module_dir, results_subdir) -> Optional[ModuleResult]
  - function format_module: (mod) -> str
  - function run: (modules_dir, results_subdir, module_filter) -> None
  - function main: () -> None
  - class TestCase
  - _...2 more_
- `modules/api/src/main/scala/de/nowchess/api/board/Board.scala`
  - class Board
  - function apply
  - function pieceAt
  - function updated
  - function removed
  - function withMove
  - _...2 more_
- `modules/api/src/main/scala/de/nowchess/api/board/CastlingRights.scala`
  - function hasAnyRights
  - function hasRights
  - function revokeColor
  - function revokeKingSide
  - function revokeQueenSide
  - class CastlingRights
- `modules/api/src/main/scala/de/nowchess/api/board/Color.scala` — function opposite, function label
- `modules/api/src/main/scala/de/nowchess/api/board/Piece.scala` — class Piece
- `modules/api/src/main/scala/de/nowchess/api/board/PieceType.scala` — function label
- `modules/api/src/main/scala/de/nowchess/api/board/Square.scala`
  - class Square
  - function fromAlgebraic
  - function offset
- `modules/api/src/main/scala/de/nowchess/api/game/GameContext.scala`
  - function withBoard
  - function withTurn
  - function withCastlingRights
  - function withEnPassantSquare
  - function withHalfMoveClock
  - function withMove
  - _...2 more_
- `modules/api/src/main/scala/de/nowchess/api/player/PlayerInfo.scala` — class PlayerId, function apply
- `modules/api/src/main/scala/de/nowchess/api/response/ApiResponse.scala`
  - class ApiResponse
  - function error
  - function totalPages
- `modules/bot/python/nnue.py`
  - function get_weights_dir: ()
  - function get_data_dir: ()
  - function list_checkpoints: ()
  - function migrate_legacy_data: ()
  - function show_header: ()
  - function show_checkpoints_table: ()
  - _...10 more_
- `modules/bot/python/src/dataset.py`
  - function get_datasets_dir: () -> Path
  - function next_dataset_version: () -> int
  - function list_datasets: () -> List[Tuple[int, Dict]]
  - function load_dataset_metadata: (version) -> Optional[Dict]
  - function save_dataset_metadata: (version, metadata) -> None
  - function create_dataset: (version, labeled_jsonl_path, sources, stockfish_depth) -> Path
  - _...4 more_
- `modules/bot/python/src/export.py` — function export_to_nbai: (weights_file, output_file, trained_by, train_loss)
- `modules/bot/python/src/generate.py` — function play_random_game_and_collect_positions: (output_file, total_positions, samples_per_game, min_move, max_move, num_workers)
- `modules/bot/python/src/label.py` — function normalize_evaluation: (cp_value, method, scale), function label_positions_with_stockfish: (positions_file, output_file, stockfish_path, batch_size, depth, verbose, normalize, num_workers)
- `modules/bot/python/src/tactical_positions_extractor.py`
  - function download_and_extract_puzzle_db: (url, output_dir)
  - function extract_puzzle_positions: (puzzle_csv, max_puzzles) -> Set[str]
  - function load_positions_from_file: (file_path) -> Set[str]
  - function merge_positions: (tactical, other, output_file)
  - function extract_tactical_only: (puzzle_csv, output_file, max_puzzles) -> int
  - function interactive_merge_positions: (puzzle_csv, output_file, max_puzzles)
- `modules/bot/python/src/train.py`
  - function fen_to_features: (fen)
  - function find_next_version: (base_name)
  - function save_metadata: (weights_file, metadata)
  - function train_nnue: (data_file, output_file, epochs, batch_size, lr, checkpoint, stockfish_depth, use_versioning, early_stopping_patience, weight_decay, subsample_ratio)
  - function burst_train: (data_file, output_file, duration_minutes, epochs_per_season, early_stopping_patience, batch_size, lr, initial_checkpoint, stockfish_depth, use_versioning, weight_decay, subsample_ratio)
  - class NNUEDataset
  - _...1 more_
- `modules/bot/src/main/scala/de/nowchess/bot/Bot.scala`
  - class Bot
  - function name
  - function nextMove
- `modules/bot/src/main/scala/de/nowchess/bot/BotController.scala`
  - class BotController
  - function getBot
  - function listBots
- `modules/bot/src/main/scala/de/nowchess/bot/BotMoveRepetition.scala`
  - class BotMoveRepetition
  - function blockedMoves
  - function repeatedMove
  - function filterAllowed
- `modules/bot/src/main/scala/de/nowchess/bot/Config.scala` — class Config
- `modules/bot/src/main/scala/de/nowchess/bot/ai/Evaluation.scala`
  - class Evaluation
  - class CHECKMATE_SCORE
  - class DRAW_SCORE
  - function evaluate
  - function initAccumulator
  - function copyAccumulator
  - _...2 more_
- `modules/bot/src/main/scala/de/nowchess/bot/bots/classic/EvaluationClassic.scala`
  - class EvaluationClassic
  - function evaluate
  - function countRay
- `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/EvaluationNNUE.scala` — class EvaluationNNUE, function evaluate
- `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/NNUE.scala`
  - class NNUE
  - function initAccumulator
  - function pushAccumulator
  - function copyAccumulator
  - function recomputeAccumulator
  - function validateAccumulator
  - _...4 more_
- `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/NbaiLoader.scala`
  - class NbaiLoader
  - function load
  - function loadDefault
- `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/NbaiMigrator.scala` — class NbaiMigrator, function migrateFromBin
- `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/NbaiModel.scala`
  - function toJson
  - class NbaiMetadata
  - function fromJson
  - function str
  - function num
- `modules/bot/src/main/scala/de/nowchess/bot/bots/nnue/NbaiWriter.scala` — class NbaiWriter, function write
- `modules/bot/src/main/scala/de/nowchess/bot/logic/AlphaBetaSearch.scala`
  - function bestMove
  - function bestMove
  - function bestMoveWithTime
  - function bestMoveWithTime
  - function loop
  - function loop
  - _...2 more_
- `modules/bot/src/main/scala/de/nowchess/bot/logic/MoveOrdering.scala`
  - class MoveOrdering
  - class OrderingContext
  - function addKillerMove
  - function getKillerMoves
  - function addHistory
  - function getHistory
  - _...3 more_
- `modules/bot/src/main/scala/de/nowchess/bot/logic/TranspositionTable.scala`
  - function probe
  - function store
  - function clear
- `modules/bot/src/main/scala/de/nowchess/bot/util/PolyglotBook.scala` — function probe, function select
- `modules/bot/src/main/scala/de/nowchess/bot/util/PolyglotHash.scala` — class PolyglotHash, function hash
- `modules/bot/src/main/scala/de/nowchess/bot/util/ZobristHash.scala`
  - class ZobristHash
  - function hash
  - function nextHash
- `modules/core/src/main/scala/de/nowchess/chess/command/Command.scala`
  - class Command
  - function execute
  - function undo
  - function description
  - class MoveResult
- `modules/core/src/main/scala/de/nowchess/chess/command/CommandInvoker.scala`
  - class CommandInvoker
  - function execute
  - function undo
  - function redo
  - function history
  - function getCurrentIndex
  - _...3 more_
- `modules/core/src/main/scala/de/nowchess/chess/controller/Parser.scala` — class Parser, function parseMove
- `modules/core/src/main/scala/de/nowchess/chess/engine/GameEngine.scala`
  - class GameEngine
  - function isPendingPromotion
  - function board
  - function turn
  - function context
  - function canUndo
  - _...11 more_
- `modules/core/src/main/scala/de/nowchess/chess/observer/Observer.scala`
  - function context
  - class Observer
  - function onGameEvent
  - class Observable
  - function subscribe
  - function unsubscribe
  - _...1 more_
- `modules/io/src/main/scala/de/nowchess/io/GameContextExport.scala` — class GameContextExport, function exportGameContext
- `modules/io/src/main/scala/de/nowchess/io/GameContextImport.scala` — class GameContextImport, function importGameContext
- `modules/io/src/main/scala/de/nowchess/io/GameFileService.scala`
  - class GameFileService
  - function saveGameToFile
  - function loadGameFromFile
  - class FileSystemGameService
  - function saveGameToFile
  - function loadGameFromFile
- `modules/io/src/main/scala/de/nowchess/io/fen/FenExporter.scala`
  - class FenExporter
  - function boardToFen
  - function gameContextToFen
  - function exportGameContext
- `modules/io/src/main/scala/de/nowchess/io/fen/FenParser.scala`
  - class FenParser
  - function parseFen
  - function importGameContext
  - function parseBoard
- `modules/io/src/main/scala/de/nowchess/io/fen/FenParserCombinators.scala`
  - class FenParserCombinators
  - function parseFen
  - function parseBoard
  - function importGameContext
- `modules/io/src/main/scala/de/nowchess/io/fen/FenParserFastParse.scala`
  - class FenParserFastParse
  - function parseFen
  - function parseBoard
  - function importGameContext
- `modules/io/src/main/scala/de/nowchess/io/fen/FenParserSupport.scala` — function buildSquares
- `modules/io/src/main/scala/de/nowchess/io/json/JsonExporter.scala` — class JsonExporter, function exportGameContext
- `modules/io/src/main/scala/de/nowchess/io/json/JsonParser.scala` — class JsonParser, function importGameContext
- `modules/io/src/main/scala/de/nowchess/io/pgn/PgnExporter.scala`
  - class PgnExporter
  - function exportGameContext
  - function exportGame
- `modules/io/src/main/scala/de/nowchess/io/pgn/PgnParser.scala`
  - class PgnParser
  - function validatePgn
  - function importGameContext
  - function parsePgn
  - function parseAlgebraicMove
- `modules/rule/src/main/scala/de/nowchess/rules/RuleSet.scala`
  - class RuleSet
  - function candidateMoves
  - function legalMoves
  - function allLegalMoves
  - function isCheck
  - function isCheckmate
  - _...4 more_
- `modules/rule/src/main/scala/de/nowchess/rules/sets/DefaultRules.scala`
  - class DefaultRules
  - function loop
  - function toMoves
  - function loop
- `modules/ui/src/main/scala/de/nowchess/ui/Main.scala` — class Main, function main
- `modules/ui/src/main/scala/de/nowchess/ui/gui/ChessBoardView.scala`
  - class ChessBoardView
  - function updateBoard
  - function updateUndoRedoButtons
  - function showMessage
  - function showPromotionDialog
- `modules/ui/src/main/scala/de/nowchess/ui/gui/ChessGUI.scala`
  - class ChessGUIApp
  - class ChessGUILauncher
  - function getEngine
  - function launch
- `modules/ui/src/main/scala/de/nowchess/ui/gui/GUIObserver.scala` — class GUIObserver
- `modules/ui/src/main/scala/de/nowchess/ui/gui/PieceSprites.scala`
  - class PieceSprites
  - function loadPieceImage
  - class SquareColors
- `modules/ui/src/main/scala/de/nowchess/ui/terminal/TerminalUI.scala` — class TerminalUI, function start
- `modules/ui/src/main/scala/de/nowchess/ui/utils/PieceUnicode.scala` — function unicode
- `modules/ui/src/main/scala/de/nowchess/ui/utils/Renderer.scala` — class Renderer, function render
@@ -0,0 +1,4 @@
# Middleware

## custom
- generate — `modules/bot/python/src/generate.py`
@@ -0,0 +1,44 @@
# NowChessSystems — Wiki

_Generated 2026-04-12 — re-run `npx codesight --wiki` if the codebase has changed._

Structural map compiled from source code via AST. No LLM — deterministic, 200ms.

> **How to use safely:** These articles tell you WHERE things live and WHAT exists. They do not show full implementation logic. Always read the actual source files before implementing new features or making changes. Never infer how a function works from the wiki alone.

## Articles

- [Overview](./overview.md)

## Quick Stats

- Routes: **0**
- Models: **0**
- Components: **0**
- Env vars: **0** required, **0** with defaults

## How to Use

- **New session:** read `index.md` (this file) for orientation — WHERE things are
- **Architecture question:** read `overview.md` (~500 tokens)
- **Domain question:** read the relevant article, then **read those source files**
- **Database question:** read `database.md`, then read the actual schema files
- **Before implementing anything:** read the source files listed in the article
- **Full source context:** read `.codesight/CODESIGHT.md`

## What the Wiki Does Not Cover

These exist in your codebase but are **not** reflected in wiki articles:

- Routes registered dynamically at runtime (loops, plugin factories, `app.use(dynamicRouter)`)
- Internal routes from npm packages (e.g. Better Auth's built-in `/api/auth/*` endpoints)
- WebSocket and SSE handlers
- Raw SQL tables not declared through an ORM
- Computed or virtual fields absent from schema declarations
- TypeScript types that are not actual database columns
- Routes marked `[inferred]` were detected via regex and may have lower precision
- gRPC, tRPC, and GraphQL resolvers may be partially captured

When in doubt, search the source. The wiki is a starting point, not a complete inventory.

---

_Last compiled: 2026-04-12 · 2 articles · [codesight](https://github.com/Houseofmvps/codesight)_
@@ -0,0 +1,5 @@
# Wiki Log

History of `npx codesight --wiki` runs. Capped at 20 entries.

## [2026-04-12 14:34:19] scan | 0 routes, 0 models, 0 components → 2 articles
@@ -0,0 +1,19 @@
# NowChessSystems — Overview

> **Navigation aid.** This article shows WHERE things live (routes, models, files). Read actual source files before implementing new features or making changes.

**NowChessSystems** is a Scala project built with raw-http.

## High-Impact Files

Changes to these files have the widest blast radius across the codebase:

- `modules/api/src/main/scala/de/nowchess/api/game/GameContext.scala` — imported by **28** files
- `modules/api/src/main/scala/de/nowchess/api/board/Square.scala` — imported by **21** files
- `modules/api/src/main/scala/de/nowchess/api/board/Color.scala` — imported by **19** files
- `modules/api/src/main/scala/de/nowchess/api/move/Move.scala` — imported by **14** files
- `modules/api/src/main/scala/de/nowchess/api/board/Board.scala` — imported by **13** files
- `modules/api/src/main/scala/de/nowchess/api/board/Piece.scala` — imported by **10** files

---

_Back to [index.md](./index.md) · Generated 2026-04-12_
@@ -0,0 +1,41 @@
# Normalize text files in the repo
* text=auto eol=lf

# Keep Windows command scripts in CRLF
*.bat text eol=crlf
*.cmd text eol=crlf

# Keep Unix shell scripts in LF
*.sh text eol=lf

# Binary assets (no EOL normalization / textual diff)
*.png binary
*.jpg binary
*.jpeg binary
*.gif binary
*.webp binary
*.bmp binary
*.ico binary

# ML / model / numeric artifacts
*.bin binary
*.pt binary
*.pth binary
*.onnx binary
*.h5 binary
*.hdf5 binary
*.pb binary
*.tflite binary
*.npy binary
*.npz binary
*.safetensors binary

# Firmware / hex-like artifacts
*.hex binary

# Packaged binaries
*.jar binary
*.zip binary
*.7z binary
*.gz binary
@@ -38,6 +38,8 @@ bin/
### VS Code ###
.vscode/
graphify-out/
.graphify_*.json

### Mac OS ###
.DS_Store
Generated
+2
@@ -8,3 +8,5 @@
/dataSources.local.xml
# Editor-based HTTP Client requests
/httpRequests/

sonarlint.xml
Generated
+133
@@ -0,0 +1,133 @@
<component name="ProjectCodeStyleConfiguration">
  <code_scheme name="Project" version="173">
    <AndroidXmlCodeStyleSettings>
      <option name="USE_CUSTOM_SETTINGS" value="true" />
    </AndroidXmlCodeStyleSettings>
    <JetCodeStyleSettings>
      <option name="CODE_STYLE_DEFAULTS" value="KOTLIN_OFFICIAL" />
    </JetCodeStyleSettings>
    <ScalaCodeStyleSettings>
      <option name="FORMATTER" value="1" />
    </ScalaCodeStyleSettings>
    <XML>
      <option name="XML_KEEP_LINE_BREAKS" value="false" />
      <option name="XML_ALIGN_ATTRIBUTES" value="false" />
      <option name="XML_SPACE_INSIDE_EMPTY_TAG" value="true" />
    </XML>
    <codeStyleSettings language="XML">
      <option name="FORCE_REARRANGE_MODE" value="1" />
      <indentOptions>
        <option name="CONTINUATION_INDENT_SIZE" value="4" />
      </indentOptions>
      <arrangement>
        <rules>
          <section>
            <rule>
              <match>
                <AND>
                  <NAME>xmlns:android</NAME>
                  <XML_ATTRIBUTE />
                  <XML_NAMESPACE>^$</XML_NAMESPACE>
                </AND>
              </match>
            </rule>
          </section>
          <section>
            <rule>
              <match>
                <AND>
                  <NAME>xmlns:.*</NAME>
                  <XML_ATTRIBUTE />
                  <XML_NAMESPACE>^$</XML_NAMESPACE>
                </AND>
              </match>
              <order>BY_NAME</order>
            </rule>
          </section>
          <section>
            <rule>
              <match>
                <AND>
                  <NAME>.*:id</NAME>
                  <XML_ATTRIBUTE />
                  <XML_NAMESPACE>http://schemas.android.com/apk/res/android</XML_NAMESPACE>
                </AND>
              </match>
            </rule>
          </section>
          <section>
            <rule>
              <match>
                <AND>
                  <NAME>.*:name</NAME>
                  <XML_ATTRIBUTE />
                  <XML_NAMESPACE>http://schemas.android.com/apk/res/android</XML_NAMESPACE>
                </AND>
              </match>
            </rule>
          </section>
          <section>
            <rule>
              <match>
                <AND>
                  <NAME>name</NAME>
                  <XML_ATTRIBUTE />
                  <XML_NAMESPACE>^$</XML_NAMESPACE>
                </AND>
              </match>
            </rule>
          </section>
          <section>
            <rule>
              <match>
                <AND>
                  <NAME>style</NAME>
                  <XML_ATTRIBUTE />
                  <XML_NAMESPACE>^$</XML_NAMESPACE>
                </AND>
              </match>
            </rule>
          </section>
          <section>
            <rule>
              <match>
                <AND>
                  <NAME>.*</NAME>
                  <XML_ATTRIBUTE />
                  <XML_NAMESPACE>^$</XML_NAMESPACE>
                </AND>
              </match>
              <order>BY_NAME</order>
            </rule>
          </section>
          <section>
            <rule>
              <match>
                <AND>
                  <NAME>.*</NAME>
                  <XML_ATTRIBUTE />
                  <XML_NAMESPACE>http://schemas.android.com/apk/res/android</XML_NAMESPACE>
                </AND>
              </match>
            </rule>
          </section>
          <section>
            <rule>
              <match>
                <AND>
                  <NAME>.*</NAME>
                  <XML_ATTRIBUTE />
                  <XML_NAMESPACE>.*</XML_NAMESPACE>
                </AND>
              </match>
              <order>BY_NAME</order>
            </rule>
          </section>
        </rules>
      </arrangement>
    </codeStyleSettings>
    <codeStyleSettings language="kotlin">
      <option name="CODE_STYLE_DEFAULTS" value="KOTLIN_OFFICIAL" />
    </codeStyleSettings>
  </code_scheme>
</component>
Generated
+1
-1
@@ -1,5 +1,5 @@
<component name="ProjectCodeStyleConfiguration">
  <state>
    <option name="PREFERRED_PROJECT_CODE_STYLE" value="Default" />
    <option name="USE_PER_PROJECT_SETTINGS" value="true" />
  </state>
</component>
Generated
+1
@@ -11,6 +11,7 @@
<option value="$PROJECT_DIR$" />
<option value="$PROJECT_DIR$/modules" />
<option value="$PROJECT_DIR$/modules/api" />
<option value="$PROJECT_DIR$/modules/bot" />
<option value="$PROJECT_DIR$/modules/core" />
<option value="$PROJECT_DIR$/modules/io" />
<option value="$PROJECT_DIR$/modules/rule" />
Generated
+1
-1
@@ -5,7 +5,7 @@
<option name="deprecationWarnings" value="true" />
<option name="uncheckedWarnings" value="true" />
</profile>
<profile name="Gradle 2" modules="NowChessSystems.modules.core.main,NowChessSystems.modules.core.scoverage,NowChessSystems.modules.core.test,NowChessSystems.modules.io.main,NowChessSystems.modules.io.scoverage,NowChessSystems.modules.io.test,NowChessSystems.modules.rule.main,NowChessSystems.modules.rule.scoverage,NowChessSystems.modules.rule.test,NowChessSystems.modules.ui.main,NowChessSystems.modules.ui.scoverage,NowChessSystems.modules.ui.test">
<profile name="Gradle 2" modules="NowChessSystems.modules.bot.main,NowChessSystems.modules.bot.scoverage,NowChessSystems.modules.bot.test,NowChessSystems.modules.core.main,NowChessSystems.modules.core.scoverage,NowChessSystems.modules.core.test,NowChessSystems.modules.io.main,NowChessSystems.modules.io.scoverage,NowChessSystems.modules.io.test,NowChessSystems.modules.rule.main,NowChessSystems.modules.rule.scoverage,NowChessSystems.modules.rule.test,NowChessSystems.modules.ui.main,NowChessSystems.modules.ui.scoverage,NowChessSystems.modules.ui.test">
<option name="deprecationWarnings" value="true" />
<option name="uncheckedWarnings" value="true" />
<parameters>
@@ -0,0 +1,15 @@
rules = [
  DisableSyntax,
  LeakingImplicitClassVal,
  NoValInForComprehension,
  ProcedureSyntax,
]

DisableSyntax.noVars = true
DisableSyntax.noThrows = true
DisableSyntax.noNulls = true
DisableSyntax.noReturns = true
DisableSyntax.noAsInstanceOf = true
DisableSyntax.noIsInstanceOf = true
DisableSyntax.noXml = true
DisableSyntax.noFinalize = true
@@ -0,0 +1,8 @@
version = 3.8.1
runner.dialect = scala3
maxColumn = 120
indent.main = 2
align.preset = more
trailingCommas = always
rewrite.rules = [SortImports, RedundantBraces]
rewrite.scala3.convertToNewSyntax = true
@@ -0,0 +1,21 @@
# Project Context

This is a Scala project using raw-http.

Middleware includes: custom.

High-impact files (most imported, changes here affect many other files):
- modules/api/src/main/scala/de/nowchess/api/game/GameContext.scala (imported by 50 files)
- modules/api/src/main/scala/de/nowchess/api/board/Square.scala (imported by 33 files)
- modules/api/src/main/scala/de/nowchess/api/board/Color.scala (imported by 30 files)
- modules/api/src/main/scala/de/nowchess/api/move/Move.scala (imported by 29 files)
- modules/api/src/main/scala/de/nowchess/api/board/Board.scala (imported by 19 files)
- modules/api/src/main/scala/de/nowchess/api/board/PieceType.scala (imported by 18 files)
- modules/rule/src/main/scala/de/nowchess/rules/sets/DefaultRules.scala (imported by 17 files)
- modules/api/src/main/scala/de/nowchess/api/board/Piece.scala (imported by 15 files)

Required environment variables (no defaults):
- STOCKFISH_PATH (modules/bot/python/nnue.py)

Read .codesight/wiki/index.md for orientation (WHERE things live). Then read actual source files before implementing. Wiki articles are navigation aids, not implementation guides.
Read .codesight/CODESIGHT.md for the complete AI context map including all routes, schema, components, libraries, config, middleware, and dependency graph.
@@ -1,7 +1,7 @@
YOU CAN:
- Edit and use the asset in any commercial or non commercial project
- Use the asset in any commercial or non commercial project

YOU CAN'T:
- Resell or distribute the asset to others
YOU CAN:
- Edit and use the asset in any commercial or non commercial project
- Use the asset in any commercial or non commercial project

YOU CAN'T:
- Resell or distribute the asset to others
- Edit and resell the asset to others
- Credits required using this link: https://fatman200.itch.io/
@@ -1,58 +1,99 @@
# CLAUDE.md — NowChessSystems
# Now-Chess

## Stack
Scala 3.5.x · Quarkus + quarkus-scala3 · Hibernate/Jakarta · Lanterna TUI · K8s + ArgoCD + Kargo · Frontend TBD (Vite/React/Angular/Vue)

### Memory

Your memory is saved under .claude/memory/MEMORY.md.

## Structure
```
build.gradle.kts / settings.gradle.kts # root; include(":modules:<svc>") per service
modules/<svc>/build.gradle.kts + src/
docs/adr/ docs/api/ docs/unresolved.md
```
Versions in root `extra["VERSIONS"]`; modules read via `rootProject.extra["VERSIONS"] as Map<String,String>`.
Scala 3.5.1 · Gradle 9

## Commands
```bash
./gradlew build
./gradlew :modules:<svc>:build|test
./gradlew :modules:<svc>:test --tests "de.nowchess.<svc>.<Class>"
```

## Workflow
1. **Plan** — restate requirement, list files, flag risks. Proceed unless genuine ambiguity.
2. **Tests first** — cover only new behaviour.
3. **Implement** — no scope creep.
4. **Verify** — check each requirement; confirm green build.

## Scala/Quarkus Rules
- `given`/`using` only (no `implicit`); `Option`/`Either`/`Try` (no `null`/`.get`)
- `jakarta.*` only; reactive I/O (`Uni`/`Multi`), no blocking on event loop
- Always exclude `org.scala-lang:scala-library` from Quarkus BOM
- Unit tests: `extends AnyFunSuite with Matchers` — no `@Test`, no `: Unit`
- Integration tests: `@QuarkusTest` + JUnit 5 — `@Test` methods need explicit `: Unit`
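
The `Option`/`Either` rule above can be illustrated with a minimal sketch; the `Square` type and `parseSquare` helper here are hypothetical, not taken from the codebase:

```scala
// Hypothetical sketch of the no-null / no-.get convention:
// fallible parsing returns Either instead of throwing or returning null.
final case class Square(file: Char, rank: Int)

def parseSquare(s: String): Either[String, Square] =
  s.toList match
    case f :: r :: Nil if 'a' <= f && f <= 'h' && '1' <= r && r <= '8' =>
      Right(Square(f, r.asDigit))
    case _ =>
      Left(s"not a square: $s")
```

Callers pattern-match on the result (or `map`/`flatMap` over it), so the failure path stays visible in the types instead of surfacing as a `NullPointerException`.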

## Coverage
Line = 100% · Branch = 100% · Method = 100% · Regression tests · document exceptions
Check: `jacoco-reporter/scoverage_coverage_gaps.py modules/{svc}/build/reports/scoverageTest/scoverage.xml`
⚠️ Use `scoverageTest/`, NOT `scoverage/`.

## Bug Fixing
Fix failures immediately without asking. After 3 failed attempts → log in `docs/unresolved.md` + surface summary.

## Agents (new service)
Sequential: architect → scala-implementer → test-writer → gradle-builder → code-reviewer (review only, no self-fix)
Parallel: only when services are fully independent (no shared contracts/state).

## Unresolved (`docs/unresolved.md`)
Append only, never delete:
```
## [YYYY-MM-DD] <title>
**Requirement/Bug:** **Root Cause:** **Attempted Fixes:** **Next Step:**
./clean # Clear build dirs — only when necessary
./compile # Compile all modules — always run
./test # Run all tests
./coverage # Check coverage
```
Try to stick to these commands for consistency.

## Done Checklist
- [ ] Plan written · files created/modified · tests green · requirements verified · unresolved logged
## Modules

| Module | Role | Depends on |
|--------|------|-----------|
| `api` | Model / shared types | (none) |
| `core` | Primary business logic | api, rule |
| `rule` | Game rules | api |
| `bot` | Bots and AI | api, rule, io |
| `io` | Export formats | api, core |
| `ui` | Entrypoint & UI | core, io |

## Style

- Use immutable data and pure functions.
- Keep functions under 30 lines. If you need "and" to describe it, split it.
- Keep cyclomatic complexity under 15.
- Avoid comments. Let names carry intent; comment only non-obvious algorithms.
- Scan for duplicated logic before finishing. Extract it.
- Follow default Sonar style for Scala.
- Use `Option` or `Either` for fallible operations; avoid exceptions for control flow.
- Naming: types are PascalCase, functions/values are camelCase.

## Code Quality

- **Coverage:** 100% condition coverage required in `api`, `core`, `rule`, `io` (mandatory); `ui` exempt.

### Linters

- **scalafmt** — enforces formatting; run `./gradlew spotlessScalaCheck` to check and `./gradlew spotlessScalaApply` to reformat.
- **scalafix** — enforces style and detects unused imports/code; run `./gradlew scalafix` to apply rules.

## Architecture Decisions

- **Immutable state as primary model:** GameContext (api) holds board, history, player state — immutable, passed through the system. Each move creates a new GameContext, enabling undo/redo without side effects.
- **Observer pattern for UI decoupling:** GameEngine publishes move/state events; CommandInvoker queues moves; UI listens to events instead of polling. GameEngine never imports UI code.
- **RuleSet trait encapsulates rules:** Move generation, check, castling, en passant all live in RuleSet implementations. GameEngine calls rules as a black box; rules don't know about the rest of core.
- **Polyglot hash must follow spec index layout:** piece keys use the interleaved mapping `(pieceType * 2 + colorBit)` with black=0/white=1, castling keys are `768..771`, en-passant file keys are `772..779` and are XORed only if the side to move has a pawn that can capture en passant, and the side-to-move key is `780` for white.
- **Alpha-beta uses sequential PV search by default:** the parallel split was disabled because fixed-window futures removed pruning effectiveness; correctness and pruning quality take priority over speculative parallelism.
- **Search hash is updated incrementally per move:** bot search updates Zobrist keys from the parent hash with move deltas instead of recomputing piece scans at every node.
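
The Polyglot index layout above can be made concrete with a small sketch. The helper names are hypothetical, and the square index is assumed to be `rank * 8 + file` as in the Polyglot book format:

```scala
// Sketch of the Polyglot key index layout (helper names hypothetical).
// pieceType: 0=pawn .. 5=king; colorBit: 0=black, 1=white; square: rank * 8 + file.
def pieceKeyIndex(pieceType: Int, colorBit: Int, square: Int): Int =
  (pieceType * 2 + colorBit) * 64 + square // 12 piece kinds x 64 squares = indices 0..767

val castlingKeyBase  = 768 // castling rights occupy 768..771
val enPassantKeyBase = 772 // one key per file a..h: 772..779
val sideToMoveKey    = 780 // XORed in only when white is to move
```

For example, a black pawn on a1 maps to index 0 and a white king on h8 to index 767, so the piece block exactly fills `0..767` before the castling, en-passant, and side-to-move keys.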

## Rules

- **Tests are the spec.** Never modify tests to pass; modify requirements or code. Update tests only if requirements change.
- Never read build folders. Ask permission if needed.
- Keep this file up to date with any important decisions or conventions.

---

## Instructions for Claude Code

### Two-Step Rule (mandatory)
**Step 1 — Orient:** Use wiki articles to find WHERE things live.
**Step 2 — Verify:** Read the actual source files listed in the wiki article BEFORE writing any code.

Wiki articles are structural summaries extracted by AST. They show routes, models, and file locations.
They do NOT show full function logic, middleware internals, or dynamic runtime behavior.
**Never write or modify code based solely on wiki content — always read source files first.**

Read in order at session start:
1. `.codesight/wiki/index.md` — orientation map (~200 tokens)
2. `.codesight/wiki/overview.md` — architecture overview (~500 tokens)
3. Domain article (e.g. `.codesight/wiki/auth.md`) → check "Source Files" section → read those files
4. `.codesight/CODESIGHT.md` — full context map for deep exploration

Routes marked `[inferred]` in wiki articles were detected via regex — verify against source before trusting.
If any source file shows ⚠ in the wiki, re-run `codesight --wiki` before proceeding.

Or use the codesight MCP server for on-demand queries:
- `codesight_get_wiki_article` — read a specific wiki article by name
- `codesight_get_wiki_index` — get the wiki index
- `codesight_get_summary` — quick project overview
- `codesight_get_routes --prefix /api/users` — filtered routes
- `codesight_get_blast_radius --file src/lib/db.ts` — impact analysis before changes
- `codesight_get_schema --model users` — specific model details

Only open specific files after consulting codesight context. This saves ~16,893 tokens per conversation.

## graphify

This project has a graphify knowledge graph at graphify-out/.

Rules:
- Before answering architecture or codebase questions, read graphify-out/GRAPH_REPORT.md for god nodes and community structure
- If graphify-out/wiki/index.md exists, navigate it instead of reading raw files
- After modifying code files in this session, run `python3 -c "from graphify.watch import _rebuild_code; from pathlib import Path; _rebuild_code(Path('.'))"` to keep the graph current
@@ -1,6 +1,8 @@
plugins {
    id("org.sonarqube") version "7.2.3.7755"
    id("org.scoverage") version "8.1" apply false
    id("com.diffplug.spotless") version "8.4.0" apply false
    id("io.github.cosmicsilence.scalafix") version "0.2.6" apply false
}

group = "de.nowchess"

@@ -20,6 +22,26 @@ sonar {
    }.joinToString(",")

    property("sonar.scala.coverage.reportPaths", scoverageReports)
    property(
        "sonar.coverage.exclusions",
        // UI renders JavaFX components; headless test environments cannot exercise rendering paths
        "modules/ui/**," +
            // FastParse macro-generated combinators produce synthetic branches that scoverage marks as uncovered
            "modules/io/src/main/scala/de/nowchess/io/fen/FenParserFastParse*," +
            // NNUE inference pipeline — coverage requires a trained model file not present in CI
            "**/bot/**/NNUE.scala," +
            "**/bot/**/NNUEBot.scala," +
            "**/bot/**/EvaluationNNUE.scala," +
            // NBAI binary format loader/writer — error paths require crafted corrupt files; migrator is a one-shot tool
            "**/bot/**/NbaiLoader.scala," +
            "**/bot/**/NbaiModel.scala," +
            "**/bot/**/NbaiMigrator.scala," +
            "**/bot/**/NbaiWriter.scala," +
            // PolyglotBook — binary I/O and dead-code guards (bit-masked fields can never exceed valid range)
            "**/bot/**/PolyglotBook.scala," +
            "**/bot/**/MoveOrdering.scala," +
            "**/bot/**/AlphaBetaSearch.scala"
    )
  }
}

@@ -32,7 +54,36 @@ val versions = mapOf(
    "SCOVERAGE" to "2.1.1",
    "SCALAFX" to "21.0.0-R32",
    "JAVAFX" to "21.0.1",
    "JUNIT_BOM" to "5.13.4"
    "JUNIT_BOM" to "5.13.4",
    "ONNXRUNTIME" to "1.19.2",
    "SCALA_PARSER_COMBINATORS" to "2.4.0",
    "FASTPARSE" to "3.0.2",
    "JACKSON" to "2.17.2",
    "JACKSON_SCALA" to "2.17.2"
)
extra["VERSIONS"] = versions

subprojects {
    apply(plugin = "com.diffplug.spotless")

    pluginManager.withPlugin("scala") {
        configure<com.diffplug.gradle.spotless.SpotlessExtension> {
            scala {
                scalafmt().configFile(rootProject.file(".scalafmt.conf"))
            }
        }

        apply(plugin = "io.github.cosmicsilence.scalafix")
        configure<io.github.cosmicsilence.scalafix.ScalafixExtension> {
            configFile.set(rootProject.file(".scalafix.conf"))
        }

        // Disable SemanticDB config for the scoverage source set — it sets -sourceroot to
        // the root project dir, which conflicts with scoverage's own -sourceroot and causes
        // reportTestScoverage to fail with "No source root found".
        tasks.matching { it.name in setOf("configSemanticDBScoverage", "checkScalafixScoverage", "checkScalafixTest") }.configureEach {
            enabled = false
        }
    }
}

@@ -0,0 +1,776 @@
openapi: 3.0.3
info:
  title: NowChess API
  description: |
    REST API for the NowChess application. Designed to feel familiar to users
    of the [lichess API](https://lichess.org/api).

    ## Authentication
    Most endpoints require a Bearer token:
    ```
    Authorization: Bearer <token>
    ```
    Authentication is reserved for future implementation — endpoints are currently
    open unless noted otherwise.

    ## Move notation
    Moves are expressed in **UCI notation**: `{from}{to}[promotion]`
    - Normal move: `e2e4`
    - Capture: `d5e6`
    - Promotion: `e7e8q` (q=queen, r=rook, b=bishop, n=knight)
    - Castling: `e1g1` (kingside white), `e1c1` (queenside white)

    ## Streaming
    Endpoints that support streaming return **NDJSON** (newline-delimited JSON).
    Request them with:
    ```
    Accept: application/x-ndjson
    ```
    Each line of the response is a complete JSON object. Empty lines are
    keep-alive heartbeats.

    ## Rate limiting
    Requests that exceed the rate limit receive `429 Too Many Requests`.
    Honour the `Retry-After` response header and wait before retrying.
  version: 1.0.0
  contact:
    name: NowChess
  license:
    name: MIT

servers:
  - url: http://localhost:8080
    description: Local development server

tags:
  - name: game
    description: Create and manage chess games
  - name: move
    description: Make moves and navigate game history
  - name: draw
    description: Draw offers and claims
  - name: import
    description: Load a game from FEN or PGN
  - name: export
    description: Export a game as FEN or PGN

paths:

  # ---------------------------------------------------------------------------
  # Game lifecycle
  # ---------------------------------------------------------------------------

  /api/board/game:
    post:
      operationId: createGame
      tags: [game]
      summary: Create a new game
      description: |
        Creates a new chess game starting from the initial position.
        Returns the full game state including the generated `gameId`.
      security:
        - bearerAuth: []
      requestBody:
        required: false
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/CreateGameRequest'
      responses:
        '201':
          description: Game created
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/GameFull'
        '400':
          $ref: '#/components/responses/BadRequest'
        '401':
          $ref: '#/components/responses/Unauthorized'
        '429':
          $ref: '#/components/responses/TooManyRequests'

  /api/board/game/{gameId}:
    get:
      operationId: getGame
      tags: [game]
      summary: Get game state
      description: Returns the full current state of a game.
      security:
        - bearerAuth: []
      parameters:
        - $ref: '#/components/parameters/gameId'
      responses:
        '200':
          description: Current game state
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/GameFull'
        '404':
          $ref: '#/components/responses/NotFound'
        '429':
          $ref: '#/components/responses/TooManyRequests'

  /api/board/game/{gameId}/stream:
    get:
      operationId: streamGame
      tags: [game]
      summary: Stream game events
      description: |
        Opens a persistent NDJSON stream for a game. The first object sent is
        a `gameFull` event containing the complete game state. Subsequent
        objects are `gameState` events sent whenever the game changes (move
        made, draw offered, game over, etc.).

        Empty lines are heartbeats to keep the connection alive.

        Connect with:
        ```
        Accept: application/x-ndjson
        ```
      security:
        - bearerAuth: []
      parameters:
        - $ref: '#/components/parameters/gameId'
      responses:
        '200':
          description: NDJSON event stream
          content:
            application/x-ndjson:
              schema:
                oneOf:
                  - $ref: '#/components/schemas/GameFullEvent'
                  - $ref: '#/components/schemas/GameStateEvent'
                  - $ref: '#/components/schemas/ErrorEvent'
        '404':
          $ref: '#/components/responses/NotFound'
        '429':
          $ref: '#/components/responses/TooManyRequests'

  /api/board/game/{gameId}/resign:
    post:
      operationId: resignGame
      tags: [game]
      summary: Resign the game
      description: The active player resigns. The game ends immediately.
      security:
        - bearerAuth: []
      parameters:
        - $ref: '#/components/parameters/gameId'
      responses:
        '200':
          description: Resignation accepted
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/OkResponse'
        '400':
          $ref: '#/components/responses/BadRequest'
        '404':
          $ref: '#/components/responses/NotFound'
        '429':
          $ref: '#/components/responses/TooManyRequests'

  # ---------------------------------------------------------------------------
  # Move-making
  # ---------------------------------------------------------------------------

  /api/board/game/{gameId}/move/{uci}:
    post:
      operationId: makeMove
      tags: [move]
      summary: Make a move
      description: |
        Submit a move in UCI notation. The move must be legal for the side
        currently to move.

        For promotion moves include the target piece as the fifth character:
        `e7e8q`, `a2a1r`, etc.

        If the move results in a pawn reaching the back rank and no promotion
        character is supplied, the game enters `promotionPending` status and
        the move is not yet applied — resubmit with the promotion character.
      security:
        - bearerAuth: []
      parameters:
        - $ref: '#/components/parameters/gameId'
        - name: uci
          in: path
          required: true
          description: Move in UCI notation (e.g. `e2e4`, `e7e8q`)
          schema:
            type: string
            pattern: '^[a-h][1-8][a-h][1-8][qrbn]?$'
            example: e2e4
      responses:
        '200':
          description: Move applied — returns updated game state
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/GameState'
        '400':
          $ref: '#/components/responses/BadRequest'
        '404':
          $ref: '#/components/responses/NotFound'
        '429':
          $ref: '#/components/responses/TooManyRequests'

  /api/board/game/{gameId}/moves:
    get:
      operationId: getLegalMoves
      tags: [move]
      summary: Get legal moves
      description: |
        Returns all legal moves for the side currently to move.
        Optionally filter to moves originating from a single square.
      security:
        - bearerAuth: []
      parameters:
        - $ref: '#/components/parameters/gameId'
        - name: square
          in: query
          required: false
          description: Filter to moves from this square (e.g. `e2`)
          schema:
            type: string
            pattern: '^[a-h][1-8]$'
            example: e2
      responses:
        '200':
          description: List of legal moves
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/LegalMovesResponse'
        '404':
          $ref: '#/components/responses/NotFound'
        '429':
          $ref: '#/components/responses/TooManyRequests'

  /api/board/game/{gameId}/undo:
    post:
      operationId: undoMove
      tags: [move]
      summary: Undo the last move
      description: Reverts the most recent move. Returns the updated game state.
      security:
        - bearerAuth: []
      parameters:
        - $ref: '#/components/parameters/gameId'
      responses:
        '200':
          description: Move undone
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/GameState'
        '400':
          description: No moves to undo
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/ApiError'
        '404':
          $ref: '#/components/responses/NotFound'
        '429':
          $ref: '#/components/responses/TooManyRequests'

  /api/board/game/{gameId}/redo:
    post:
      operationId: redoMove
      tags: [move]
      summary: Redo a previously undone move
      description: Re-applies the next move in the undo stack. Returns the updated game state.
      security:
        - bearerAuth: []
      parameters:
        - $ref: '#/components/parameters/gameId'
      responses:
        '200':
          description: Move redone
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/GameState'
        '400':
          description: No moves to redo
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/ApiError'
        '404':
          $ref: '#/components/responses/NotFound'
        '429':
          $ref: '#/components/responses/TooManyRequests'

  # ---------------------------------------------------------------------------
  # Draw handling
  # ---------------------------------------------------------------------------

  /api/board/game/{gameId}/draw/{action}:
    post:
      operationId: drawAction
      tags: [draw]
      summary: Offer, accept, decline, or claim a draw
      description: |
        Perform a draw-related action:

        | Action | Description |
        |-----------|-------------|
        | `offer` | Offer a draw to the opponent |
        | `accept` | Accept the opponent's draw offer |
        | `decline` | Decline the opponent's draw offer |
        | `claim` | Claim a draw under the fifty-move rule (only valid when `status` is `fiftyMoveAvailable`) |
      security:
        - bearerAuth: []
      parameters:
        - $ref: '#/components/parameters/gameId'
        - name: action
          in: path
          required: true
          schema:
            type: string
            enum: [offer, accept, decline, claim]
      responses:
        '200':
          description: Action accepted
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/OkResponse'
        '400':
          $ref: '#/components/responses/BadRequest'
        '404':
          $ref: '#/components/responses/NotFound'
        '429':
          $ref: '#/components/responses/TooManyRequests'

  # ---------------------------------------------------------------------------
  # Import
  # ---------------------------------------------------------------------------

  /api/board/game/import/fen:
    post:
      operationId: importFen
      tags: [import]
      summary: Load a position from FEN
      description: |
        Creates a new game from a FEN string. The game starts at the position
        described by the FEN; move history prior to that position is not
        available.
      security:
        - bearerAuth: []
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/ImportFenRequest'
      responses:
        '201':
          description: Game created from FEN
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/GameFull'
        '400':
          $ref: '#/components/responses/BadRequest'
        '429':
          $ref: '#/components/responses/TooManyRequests'

  /api/board/game/import/pgn:
    post:
      operationId: importPgn
      tags: [import]
      summary: Load a game from PGN
      description: |
        Creates a new game by replaying all moves in a PGN string. The game
        starts at the position after the final move in the PGN; undo is
        available for every replayed move.
      security:
        - bearerAuth: []
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/ImportPgnRequest'
      responses:
        '201':
          description: Game created from PGN
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/GameFull'
        '400':
          $ref: '#/components/responses/BadRequest'
        '429':
          $ref: '#/components/responses/TooManyRequests'

  # ---------------------------------------------------------------------------
  # Export
  # ---------------------------------------------------------------------------

  /api/board/game/{gameId}/export/fen:
    get:
      operationId: exportFen
      tags: [export]
      summary: Export current position as FEN
      description: Returns the FEN string representing the current board position.
      security:
        - bearerAuth: []
      parameters:
        - $ref: '#/components/parameters/gameId'
      responses:
        '200':
          description: FEN string
          content:
            text/plain:
              schema:
                type: string
                example: rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq e3 0 1
        '404':
          $ref: '#/components/responses/NotFound'
        '429':
          $ref: '#/components/responses/TooManyRequests'

  /api/board/game/{gameId}/export/pgn:
    get:
      operationId: exportPgn
      tags: [export]
      summary: Export game as PGN
      description: Returns the full PGN for the game including headers and move text.
      security:
        - bearerAuth: []
      parameters:
        - $ref: '#/components/parameters/gameId'
      responses:
        '200':
          description: PGN text
          content:
            application/x-chess-pgn:
              schema:
                type: string
                example: |
                  [Event "NowChess game"]
                  [White "Player1"]
                  [Black "Player2"]
                  [Result "*"]

                  1. e4 e5 2. Nf3 *
        '404':
          $ref: '#/components/responses/NotFound'
        '429':
          $ref: '#/components/responses/TooManyRequests'

# =============================================================================
# Components
# =============================================================================

components:

  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
      description: 'Personal access token — `Authorization: Bearer <token>`'

  parameters:
    gameId:
      name: gameId
      in: path
      required: true
      description: 8-character alphanumeric game ID (e.g. `Qa7FJNk2`)
      schema:
        type: string
        pattern: '^[A-Za-z0-9]{8}$'
        example: Qa7FJNk2

  responses:
    BadRequest:
      description: Invalid input
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ApiError'
    Unauthorized:
      description: Missing or invalid authentication token
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ApiError'
    NotFound:
      description: Game not found
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ApiError'
    TooManyRequests:
      description: Rate limit exceeded — see `Retry-After` header
      headers:
        Retry-After:
          description: Seconds to wait before retrying
          schema:
            type: integer
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ApiError'

schemas:
|
||||
|
||||
# -------------------------------------------------------------------------
|
||||
# Requests
|
||||
# -------------------------------------------------------------------------
|
||||
|
||||
CreateGameRequest:
|
||||
type: object
|
||||
description: Parameters for creating a new game. All fields are optional.
|
||||
properties:
|
||||
white:
|
||||
$ref: '#/components/schemas/PlayerInfo'
|
||||
black:
|
||||
$ref: '#/components/schemas/PlayerInfo'
|
||||
|
||||
ImportFenRequest:
|
||||
type: object
|
||||
required: [fen]
|
||||
properties:
|
||||
fen:
|
||||
type: string
|
||||
description: Complete FEN string (6 fields)
|
||||
example: rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1
|
||||
white:
|
||||
$ref: '#/components/schemas/PlayerInfo'
|
||||
black:
|
||||
$ref: '#/components/schemas/PlayerInfo'
|
||||
|
||||
ImportPgnRequest:
|
||||
type: object
|
||||
required: [pgn]
|
||||
properties:
|
||||
pgn:
|
||||
type: string
|
||||
description: PGN text (headers and move list)
|
||||
example: "1. e4 e5 2. Nf3 Nc6 *"
|
||||
|
||||
# -------------------------------------------------------------------------
|
||||
# Game state
|
||||
# -------------------------------------------------------------------------
|
||||
|
||||
GameFull:
|
||||
type: object
|
||||
description: Complete game information including players and current state.
|
||||
required: [gameId, white, black, state]
|
||||
properties:
|
||||
gameId:
|
||||
type: string
|
||||
description: Unique 8-character game identifier
|
||||
example: Qa7FJNk2
|
||||
white:
|
||||
$ref: '#/components/schemas/PlayerInfo'
|
||||
black:
|
||||
$ref: '#/components/schemas/PlayerInfo'
|
||||
state:
|
||||
$ref: '#/components/schemas/GameState'
|
||||
|
||||
GameState:
|
||||
type: object
|
||||
description: |
|
||||
The current game state. Included in `GameFull` and returned by move
|
||||
endpoints and stream events.
|
||||
required: [fen, pgn, turn, status, moves, undoAvailable, redoAvailable]
|
||||
properties:
|
||||
fen:
|
||||
type: string
|
||||
description: FEN string for the current position
|
||||
example: rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq e3 0 1
|
||||
pgn:
|
||||
type: string
|
||||
description: PGN move text for the full game so far
|
||||
example: "1. e4"
|
||||
turn:
|
||||
type: string
|
||||
enum: [white, black]
|
||||
description: The side to move
|
||||
status:
|
||||
$ref: '#/components/schemas/GameStatus'
|
||||
winner:
|
||||
type: string
|
||||
enum: [white, black]
|
||||
description: Set when `status` is `checkmate` or `resign`
|
||||
nullable: true
|
||||
moves:
|
||||
type: array
|
||||
description: All moves played so far, in UCI notation
|
||||
items:
|
||||
type: string
|
||||
example: [e2e4, e7e5, g1f3]
|
||||
undoAvailable:
|
||||
type: boolean
|
||||
description: Whether `POST /undo` is currently valid
|
||||
redoAvailable:
|
||||
type: boolean
|
||||
description: Whether `POST /redo` is currently valid
|
||||
|
||||
GameStatus:
|
||||
type: string
|
||||
description: |
|
||||
Current game status:
|
||||
|
||||
| Value | Meaning |
|
||||
|-------|---------|
|
||||
| `started` | Game in progress, no special condition |
|
||||
| `check` | Side to move is in check |
|
||||
| `checkmate` | Side to move is checkmated — game over |
|
||||
| `stalemate` | Side to move has no legal moves, not in check — game over (draw) |
|
||||
| `resign` | A player resigned — game over |
|
||||
| `draw` | Draw agreed or claimed — game over |
|
||||
| `drawOffered` | Waiting for the opponent to accept or decline a draw offer |
|
||||
| `fiftyMoveAvailable` | Fifty-move rule threshold reached; active player may claim draw |
|
||||
| `promotionPending` | A pawn reached the back rank; awaiting promotion piece selection |
|
||||
| `insufficientMaterial` | Neither side has enough pieces to deliver checkmate — game over (draw) |
|
||||
enum:
|
||||
- started
|
||||
- check
|
||||
- checkmate
|
||||
- stalemate
|
||||
- resign
|
||||
- draw
|
||||
- drawOffered
|
||||
- fiftyMoveAvailable
|
||||
- promotionPending
|
||||
- insufficientMaterial
|
||||
|
||||
    # -------------------------------------------------------------------------
    # Moves
    # -------------------------------------------------------------------------

    LegalMovesResponse:
      type: object
      required: [moves]
      properties:
        moves:
          type: array
          items:
            $ref: '#/components/schemas/LegalMove'

    LegalMove:
      type: object
      required: [from, to, uci, moveType]
      properties:
        from:
          type: string
          description: Origin square in algebraic notation
          example: e2
        to:
          type: string
          description: Destination square in algebraic notation
          example: e4
        uci:
          type: string
          description: Full move in UCI notation
          example: e2e4
        moveType:
          $ref: '#/components/schemas/MoveType'
        promotion:
          type: string
          enum: [queen, rook, bishop, knight]
          description: Target piece for promotion moves
          nullable: true

    MoveType:
      type: string
      description: Classification of the move
      enum:
        - normal
        - capture
        - castleKingside
        - castleQueenside
        - enPassant
        - promotion

    # -------------------------------------------------------------------------
    # Streaming events
    # -------------------------------------------------------------------------

    GameFullEvent:
      type: object
      description: |
        First event on a game stream. Contains the complete game snapshot.
      required: [type, game]
      properties:
        type:
          type: string
          enum: [gameFull]
        game:
          $ref: '#/components/schemas/GameFull'

    GameStateEvent:
      type: object
      description: |
        Emitted on a game stream whenever the game state changes (move played,
        draw offered, game over, etc.).
      required: [type, state]
      properties:
        type:
          type: string
          enum: [gameState]
        state:
          $ref: '#/components/schemas/GameState'

    ErrorEvent:
      type: object
      description: Emitted on a game stream when an error occurs.
      required: [type, error]
      properties:
        type:
          type: string
          enum: [error]
        error:
          $ref: '#/components/schemas/ApiError'

    # -------------------------------------------------------------------------
    # Shared types
    # -------------------------------------------------------------------------

    PlayerInfo:
      type: object
      required: [id, displayName]
      properties:
        id:
          type: string
          description: Unique player identifier
          example: player1
        displayName:
          type: string
          description: Human-readable display name
          example: Alice

    OkResponse:
      type: object
      required: [ok]
      properties:
        ok:
          type: boolean
          enum: [true]

    ApiError:
      type: object
      required: [code, message]
      properties:
        code:
          type: string
          description: Machine-readable error code
          example: INVALID_MOVE
        message:
          type: string
          description: Human-readable error description
          example: e2e5 is not a legal move
        field:
          type: string
          description: Request field that caused the error, if applicable
          example: uci
          nullable: true
@@ -1,20 +0,0 @@
## [2026-03-31] Unreachable code blocking 100% statement coverage

**Requirement/Bug:** Reach 100% statement coverage in the core module.

**Root Cause:** The 4 remaining uncovered statements (99.6% coverage) are unreachable code:

1. **PgnParser.scala:160** (`case _ => None` in extractPromotion) - the regex `=([QRBN])` only matches those 4 characters, so the fallback case can never execute
2. **GameHistory.scala:29** (compiler-generated `addMove$default$4` method) - method overload 3 (without defaults) shadows the 4-param version, making the promotionPiece default accessor unreachable
3. **GameEngine.scala:201-202** (`case _` in completePromotion) - GameController.completePromotion always returns one of the 4 expected MoveResult types; the catch-all is defensive code

**Attempted Fixes:**

1. Added comprehensive PGN parsing tests (all 4 promotion types) - PgnParser improved from 95.8% to 99.4%
2. Added GameHistory tests using named parameters - these hit `addMove$default$3` (castleSide) but not `$default$4` (promotionPiece)
3. Named-parameter approach: `addMove(from=..., to=..., promotionPiece=...)` triggers the 4-param overload with the castleSide default ✓
4. Positional approach: `addMove(f, t, None, None)` supplies all 4 args explicitly, so no defaults are used - doesn't hit `$default$4`
5. Root issue: Scala's overload resolution prefers more-specific non-default overloads (2-param, 3-param) over the 4-param version with defaults

**Recommendation:** 99.6% (1029/1033) is the maximum achievable without refactoring the method overloads. The unreachable-code patterns involved:

- **Pattern 1 (unreachable regex fallback):** defensive pattern match against an exhaustive regex
- **Pattern 2 (overshadowed defaults):** method overloads shadow default parameters in the parent signature
- **Pattern 3 (defensive catch-all):** error handling for impossible external API returns
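The overload-shadowing issue behind item 2 can be reproduced with a minimal sketch. The signatures below are hypothetical stand-ins, not the actual GameHistory API:

```scala
// Minimal sketch of the overload-shadowing issue (hypothetical signatures,
// not the real GameHistory API).
object History:
  // Arity-2 overload: an exact match for two-argument call sites, so the
  // compiler never needs the 4-param version's defaults there.
  def addMove(from: String, to: String): String =
    addMove(from, to, None, None)

  // 4-param overload with defaults. Its compiler-generated default accessor
  // for promotionPiece (addMove$default$4) only runs when a call both
  // resolves here AND omits the fourth argument.
  def addMove(
      from: String,
      to: String,
      castleSide: Option[String] = None,
      promotionPiece: Option[String] = None,
  ): String =
    from + to + promotionPiece.getOrElse("")
```

Any two-argument call resolves to the arity-2 overload (Scala prefers alternatives that are applicable without default arguments), so `addMove$default$4` is dead code unless a call site resolves to the 4-param version while omitting `promotionPiece`.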
@@ -1,5 +1,5 @@
 import glob,re
-mods=['api','core','io','rule','ui']
+mods=['api','core','io','rule','ui', 'bot']
 tot=0
 for m in mods:
   s=0

@@ -16,3 +16,38 @@
|
||||
### Features
|
||||
|
||||
* NCS-21 Write Scripts to automate certain tasks ([#15](https://git.janis-eccarius.de/NowChess/NowChessSystems/issues/15)) ([8051871](https://git.janis-eccarius.de/NowChess/NowChessSystems/commit/80518719d536a087d339fe02530825dc07f8b388))
|
||||
## (2026-04-07)
|
||||
|
||||
### Features
|
||||
|
||||
* NCS-21 Write Scripts to automate certain tasks ([#15](https://git.janis-eccarius.de/NowChess/NowChessSystems/issues/15)) ([8051871](https://git.janis-eccarius.de/NowChess/NowChessSystems/commit/80518719d536a087d339fe02530825dc07f8b388))
|
||||
## (2026-04-12)
|
||||
|
||||
### Features
|
||||
|
||||
* NCS-21 Write Scripts to automate certain tasks ([#15](https://git.janis-eccarius.de/NowChess/NowChessSystems/issues/15)) ([8051871](https://git.janis-eccarius.de/NowChess/NowChessSystems/commit/80518719d536a087d339fe02530825dc07f8b388))
|
||||
* NCS-25 Add linters to keep quality up ([#27](https://git.janis-eccarius.de/NowChess/NowChessSystems/issues/27)) ([fd4e67d](https://git.janis-eccarius.de/NowChess/NowChessSystems/commit/fd4e67d4f782a7e955822d90cb909d0a81676fb2))
|
||||
## (2026-04-14)
|
||||
|
||||
### Features
|
||||
|
||||
* NCS-14 implemented insufficient moves rule ([#30](https://git.janis-eccarius.de/NowChess/NowChessSystems/issues/30)) ([b0399a4](https://git.janis-eccarius.de/NowChess/NowChessSystems/commit/b0399a4e489950083066c9538df9a84dcc7a4613))
|
||||
* NCS-21 Write Scripts to automate certain tasks ([#15](https://git.janis-eccarius.de/NowChess/NowChessSystems/issues/15)) ([8051871](https://git.janis-eccarius.de/NowChess/NowChessSystems/commit/80518719d536a087d339fe02530825dc07f8b388))
|
||||
* NCS-25 Add linters to keep quality up ([#27](https://git.janis-eccarius.de/NowChess/NowChessSystems/issues/27)) ([fd4e67d](https://git.janis-eccarius.de/NowChess/NowChessSystems/commit/fd4e67d4f782a7e955822d90cb909d0a81676fb2))
|
||||
## (2026-04-16)
|
||||
|
||||
### Features
|
||||
|
||||
* NCS-13 Implement Threefold Repetition ([#31](https://git.janis-eccarius.de/NowChess/NowChessSystems/issues/31)) ([767d305](https://git.janis-eccarius.de/NowChess/NowChessSystems/commit/767d3051a76c266050b6335774d66e2db2273c16))
|
||||
* NCS-14 implemented insufficient moves rule ([#30](https://git.janis-eccarius.de/NowChess/NowChessSystems/issues/30)) ([b0399a4](https://git.janis-eccarius.de/NowChess/NowChessSystems/commit/b0399a4e489950083066c9538df9a84dcc7a4613))
|
||||
* NCS-21 Write Scripts to automate certain tasks ([#15](https://git.janis-eccarius.de/NowChess/NowChessSystems/issues/15)) ([8051871](https://git.janis-eccarius.de/NowChess/NowChessSystems/commit/80518719d536a087d339fe02530825dc07f8b388))
|
||||
* NCS-25 Add linters to keep quality up ([#27](https://git.janis-eccarius.de/NowChess/NowChessSystems/issues/27)) ([fd4e67d](https://git.janis-eccarius.de/NowChess/NowChessSystems/commit/fd4e67d4f782a7e955822d90cb909d0a81676fb2))
|
||||
## (2026-04-19)
|
||||
|
||||
### Features
|
||||
|
||||
* NCS-13 Implement Threefold Repetition ([#31](https://git.janis-eccarius.de/NowChess/NowChessSystems/issues/31)) ([767d305](https://git.janis-eccarius.de/NowChess/NowChessSystems/commit/767d3051a76c266050b6335774d66e2db2273c16))
|
||||
* NCS-14 implemented insufficient moves rule ([#30](https://git.janis-eccarius.de/NowChess/NowChessSystems/issues/30)) ([b0399a4](https://git.janis-eccarius.de/NowChess/NowChessSystems/commit/b0399a4e489950083066c9538df9a84dcc7a4613))
|
||||
* NCS-21 Write Scripts to automate certain tasks ([#15](https://git.janis-eccarius.de/NowChess/NowChessSystems/issues/15)) ([8051871](https://git.janis-eccarius.de/NowChess/NowChessSystems/commit/80518719d536a087d339fe02530825dc07f8b388))
|
||||
* NCS-25 Add linters to keep quality up ([#27](https://git.janis-eccarius.de/NowChess/NowChessSystems/issues/27)) ([fd4e67d](https://git.janis-eccarius.de/NowChess/NowChessSystems/commit/fd4e67d4f782a7e955822d90cb909d0a81676fb2))
|
||||
* NCS-41 Bot Platform ([#33](https://git.janis-eccarius.de/NowChess/NowChessSystems/issues/33)) ([dceab08](https://git.janis-eccarius.de/NowChess/NowChessSystems/commit/dceab0875e6d15f7d3958633cf5dd5b29a851b1d))
|
||||
|
||||
@@ -7,11 +7,11 @@ object Board:
  def apply(pieces: Map[Square, Piece]): Board = pieces

  extension (b: Board)
    def pieceAt(sq: Square): Option[Piece] = b.get(sq)
    def updated(sq: Square, piece: Piece): Board = b.updated(sq, piece)
    def removed(sq: Square): Board = b.removed(sq)
    def withMove(from: Square, to: Square): (Board, Option[Piece]) =
      val captured = b.get(to)
      val updatedBoard = b.removed(from).updated(to, b(from))
      (updatedBoard, captured)
    def applyMove(move: de.nowchess.api.move.Move): Board =

@@ -21,8 +21,14 @@ object Board:

  val initial: Board =
    val backRank: Vector[PieceType] = Vector(
      PieceType.Rook, PieceType.Knight, PieceType.Bishop, PieceType.Queen,
      PieceType.King, PieceType.Bishop, PieceType.Knight, PieceType.Rook
      PieceType.Rook,
      PieceType.Knight,
      PieceType.Bishop,
      PieceType.Queen,
      PieceType.King,
      PieceType.Bishop,
      PieceType.Knight,
      PieceType.Rook,
    )
    val entries = for
      fileIdx <- 0 until 8

@@ -30,7 +36,7 @@ object Board:
        (Color.White, Rank.R1, backRank(fileIdx)),
        (Color.White, Rank.R2, PieceType.Pawn),
        (Color.Black, Rank.R8, backRank(fileIdx)),
        (Color.Black, Rank.R7, PieceType.Pawn)
        (Color.Black, Rank.R7, PieceType.Pawn),
      )
    yield Square(File.values(fileIdx), rank) -> Piece(color, pieceType)
    Board(entries.toMap)

@@ -1,50 +1,48 @@
package de.nowchess.api.board

/**
 * Unified castling rights tracker for all four sides.
 * Tracks whether castling is still available for each side and direction.
 *
 * @param whiteKingSide White's king-side castling (0-0) still legally available
 * @param whiteQueenSide White's queen-side castling (0-0-0) still legally available
 * @param blackKingSide Black's king-side castling (0-0) still legally available
 * @param blackQueenSide Black's queen-side castling (0-0-0) still legally available
 */
/** Unified castling rights tracker for all four sides. Tracks whether castling is still available for each side and
  * direction.
  *
  * @param whiteKingSide
  *   White's king-side castling (0-0) still legally available
  * @param whiteQueenSide
  *   White's queen-side castling (0-0-0) still legally available
  * @param blackKingSide
  *   Black's king-side castling (0-0) still legally available
  * @param blackQueenSide
  *   Black's queen-side castling (0-0-0) still legally available
  */
final case class CastlingRights(
    whiteKingSide: Boolean,
    whiteQueenSide: Boolean,
    blackKingSide: Boolean,
    blackQueenSide: Boolean
    blackQueenSide: Boolean,
):
  /**
   * Check if either side has any castling rights remaining.
   */
  /** Check if either side has any castling rights remaining.
    */
  def hasAnyRights: Boolean =
    whiteKingSide || whiteQueenSide || blackKingSide || blackQueenSide

  /**
   * Check if a specific color has any castling rights remaining.
   */
  /** Check if a specific color has any castling rights remaining.
    */
  def hasRights(color: Color): Boolean = color match
    case Color.White => whiteKingSide || whiteQueenSide
    case Color.Black => blackKingSide || blackQueenSide

  /**
   * Revoke all castling rights for a specific color.
   */
  /** Revoke all castling rights for a specific color.
    */
  def revokeColor(color: Color): CastlingRights = color match
    case Color.White => copy(whiteKingSide = false, whiteQueenSide = false)
    case Color.Black => copy(blackKingSide = false, blackQueenSide = false)

  /**
   * Revoke a specific castling right.
   */
  /** Revoke a specific castling right.
    */
  def revokeKingSide(color: Color): CastlingRights = color match
    case Color.White => copy(whiteKingSide = false)
    case Color.Black => copy(blackKingSide = false)

  /**
   * Revoke a specific castling right.
   */
  /** Revoke a specific castling right.
    */
  def revokeQueenSide(color: Color): CastlingRights = color match
    case Color.White => copy(whiteQueenSide = false)
    case Color.Black => copy(blackQueenSide = false)

@@ -55,7 +53,7 @@ object CastlingRights:
    whiteKingSide = false,
    whiteQueenSide = false,
    blackKingSide = false,
    blackQueenSide = false
    blackQueenSide = false,
  )

  /** All castling rights available. */

@@ -63,7 +61,7 @@ object CastlingRights:
    whiteKingSide = true,
    whiteQueenSide = true,
    blackKingSide = true,
    blackQueenSide = true
    blackQueenSide = true,
  )

  /** Standard starting position castling rights (both sides can castle both ways). */

@@ -5,16 +5,16 @@ final case class Piece(color: Color, pieceType: PieceType)

object Piece:
  // Convenience constructors
  val WhitePawn: Piece = Piece(Color.White, PieceType.Pawn)
  val WhiteKnight: Piece = Piece(Color.White, PieceType.Knight)
  val WhiteBishop: Piece = Piece(Color.White, PieceType.Bishop)
  val WhiteRook: Piece = Piece(Color.White, PieceType.Rook)
  val WhiteQueen: Piece = Piece(Color.White, PieceType.Queen)
  val WhiteKing: Piece = Piece(Color.White, PieceType.King)

  val BlackPawn: Piece = Piece(Color.Black, PieceType.Pawn)
  val BlackKnight: Piece = Piece(Color.Black, PieceType.Knight)
  val BlackBishop: Piece = Piece(Color.Black, PieceType.Bishop)
  val BlackRook: Piece = Piece(Color.Black, PieceType.Rook)
  val BlackQueen: Piece = Piece(Color.Black, PieceType.Queen)
  val BlackKing: Piece = Piece(Color.Black, PieceType.King)

@@ -1,43 +1,38 @@
package de.nowchess.api.board

/**
 * A file (column) on the chess board, a–h.
 * Ordinal values 0–7 correspond to a–h.
 */
/** A file (column) on the chess board, a–h. Ordinal values 0–7 correspond to a–h.
  */
enum File:
  case A, B, C, D, E, F, G, H

/**
 * A rank (row) on the chess board, 1–8.
 * Ordinal values 0–7 correspond to ranks 1–8.
 */
/** A rank (row) on the chess board, 1–8. Ordinal values 0–7 correspond to ranks 1–8.
  */
enum Rank:
  case R1, R2, R3, R4, R5, R6, R7, R8

/**
 * A unique square on the board, identified by its file and rank.
 *
 * @param file the column, a–h
 * @param rank the row, 1–8
 */
/** A unique square on the board, identified by its file and rank.
  *
  * @param file
  *   the column, a–h
  * @param rank
  *   the row, 1–8
  */
final case class Square(file: File, rank: Rank):
  /** Algebraic notation string, e.g. "e4". */
  override def toString: String =
    s"${file.toString.toLowerCase}${rank.ordinal + 1}"

object Square:
  /** Parse a square from algebraic notation (e.g. "e4").
    * Returns None if the input is not a valid square name. */
  /** Parse a square from algebraic notation (e.g. "e4"). Returns None if the input is not a valid square name.
    */
  def fromAlgebraic(s: String): Option[Square] =
    if s.length != 2 then None
    else
      val fileChar = s.charAt(0)
      val rankChar = s.charAt(1)
      val fileOpt = File.values.find(_.toString.equalsIgnoreCase(fileChar.toString))
      val rankOpt =
        rankChar.toString.toIntOption.flatMap(n =>
          if n >= 1 && n <= 8 then Some(Rank.values(n - 1)) else None
        )
        rankChar.toString.toIntOption.flatMap(n => if n >= 1 && n <= 8 then Some(Rank.values(n - 1)) else None)
      for f <- fileOpt; r <- rankOpt yield Square(f, r)

  val all: IndexedSeq[Square] =

@@ -46,12 +41,13 @@ object Square:
      f <- File.values.toIndexedSeq
    yield Square(f, r)

  /** Compute a target square by offsetting file and rank.
    * Returns None if the resulting square is outside the board (0-7 range). */
  /** Compute a target square by offsetting file and rank. Returns None if the resulting square is outside the board
    * (0-7 range).
    */
  extension (sq: Square)
    def offset(fileDelta: Int, rankDelta: Int): Option[Square] =
      val newFileOrd = sq.file.ordinal + fileDelta
      val newRankOrd = sq.rank.ordinal + rankDelta
      if newFileOrd >= 0 && newFileOrd < 8 && newRankOrd >= 0 && newRankOrd < 8 then
        Some(Square(File.values(newFileOrd), Rank.values(newRankOrd)))
      else None

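The parsing logic in `fromAlgebraic` can be exercised with a self-contained sketch that re-declares minimal `File`/`Rank`/`Square` stand-ins rather than importing the module:

```scala
// Stand-ins mirroring the api module's board types (simplified).
enum File:
  case A, B, C, D, E, F, G, H

enum Rank:
  case R1, R2, R3, R4, R5, R6, R7, R8

final case class Square(file: File, rank: Rank):
  // Algebraic notation string, e.g. "e4".
  override def toString: String =
    s"${file.toString.toLowerCase}${rank.ordinal + 1}"

object Square:
  // Parse algebraic notation; None for anything outside a1..h8.
  def fromAlgebraic(s: String): Option[Square] =
    if s.length != 2 then None
    else
      val fileOpt = File.values.find(_.toString.equalsIgnoreCase(s.charAt(0).toString))
      val rankOpt = s.charAt(1).toString.toIntOption
        .flatMap(n => if n >= 1 && n <= 8 then Some(Rank.values(n - 1)) else None)
      for f <- fileOpt; r <- rankOpt yield Square(f, r)
```

For every valid square name, round-tripping through `toString` returns the input; anything else (wrong length, file outside a–h, rank outside 1–8) yields `None`.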
@@ -0,0 +1,11 @@
package de.nowchess.api.bot

import de.nowchess.api.game.GameContext
import de.nowchess.api.move.Move

trait Bot {

  def name: String
  def nextMove(context: GameContext): Option[Move]

}

@@ -0,0 +1,9 @@
package de.nowchess.api.game

/** Reason why a game ended in a draw. */
enum DrawReason:
  case Stalemate
  case InsufficientMaterial
  case FiftyMoveRule
  case ThreefoldRepetition
  case Agreement

@@ -1,19 +1,27 @@
package de.nowchess.api.game

import de.nowchess.api.board.{Board, Color, Square, CastlingRights}
import de.nowchess.api.board.{Board, CastlingRights, Color, PieceType, Square}
import de.nowchess.api.move.Move

/** Immutable bundle of complete game state.
  * All state changes produce new GameContext instances.
  */
/** Immutable bundle of complete game state. All state changes produce new GameContext instances.
  */
case class GameContext(
    board: Board,
    turn: Color,
    castlingRights: CastlingRights,
    enPassantSquare: Option[Square],
    halfMoveClock: Int,
    moves: List[Move]
    moves: List[Move],
    result: Option[GameResult] = None,
    initialBoard: Board = Board.initial,
):
  private lazy val whiteKingSquare: Option[Square] =
    board.pieces.find((_, p) => p.color == Color.White && p.pieceType == PieceType.King).map(_._1)
  private lazy val blackKingSquare: Option[Square] =
    board.pieces.find((_, p) => p.color == Color.Black && p.pieceType == PieceType.King).map(_._1)
  def kingSquare(color: Color): Option[Square] =
    if color == Color.White then whiteKingSquare else blackKingSquare

  /** Create new context with updated board. */
  def withBoard(newBoard: Board): GameContext = copy(board = newBoard)

@@ -32,6 +40,9 @@ case class GameContext(
  /** Create new context with move appended to history. */
  def withMove(move: Move): GameContext = copy(moves = moves :+ move)

  /** Create new context with updated result. */
  def withResult(newResult: Option[GameResult]): GameContext = copy(result = newResult)

object GameContext:
  /** Initial position: white to move, all castling rights, no en passant. */
  def initial: GameContext = GameContext(

@@ -40,5 +51,5 @@ object GameContext:
    castlingRights = CastlingRights.Initial,
    enPassantSquare = None,
    halfMoveClock = 0,
    moves = List.empty
    moves = List.empty,
  )

@@ -0,0 +1,8 @@
package de.nowchess.api.game

import de.nowchess.api.board.Color

/** Outcome of a finished game. */
enum GameResult:
  case Win(color: Color)
  case Draw(reason: DrawReason)

@@ -0,0 +1,8 @@
package de.nowchess.api.game

import de.nowchess.api.bot.Bot
import de.nowchess.api.player.PlayerInfo

sealed trait Participant
final case class Human(playerInfo: PlayerInfo) extends Participant
final case class BotParticipant(bot: Bot) extends Participant

@@ -10,24 +10,30 @@ enum PromotionPiece:
enum MoveType:
  /** A normal move or capture with no special rule. */
  case Normal(isCapture: Boolean = false)

  /** Kingside castling (O-O). */
  case CastleKingside

  /** Queenside castling (O-O-O). */
  case CastleQueenside

  /** En-passant pawn capture. */
  case EnPassant

  /** Pawn promotion; carries the chosen promotion piece. */
  case Promotion(piece: PromotionPiece)

/**
 * A half-move (ply) in a chess game.
 *
 * @param from origin square
 * @param to destination square
 * @param moveType special semantics; defaults to Normal
 */
/** A half-move (ply) in a chess game.
  *
  * @param from
  *   origin square
  * @param to
  *   destination square
  * @param moveType
  *   special semantics; defaults to Normal
  */
final case class Move(
    from: Square,
    to: Square,
    moveType: MoveType = MoveType.Normal()
    moveType: MoveType = MoveType.Normal(),
)

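The OpenAPI spec's flat `moveType` enum (normal, capture, castleKingside, castleQueenside, enPassant, promotion) folds `Normal(isCapture)` into two wire values. A sketch of that mapping — `wireName` is an assumed helper for illustration, not part of the diff:

```scala
enum PromotionPiece:
  case Queen, Rook, Bishop, Knight

enum MoveType:
  case Normal(isCapture: Boolean = false)
  case CastleKingside
  case CastleQueenside
  case EnPassant
  case Promotion(piece: PromotionPiece)

// Hypothetical serializer to the OpenAPI `moveType` wire enum; the ADT's
// Normal(isCapture) case splits into the two wire values "normal"/"capture".
def wireName(mt: MoveType): String = mt match
  case MoveType.Normal(true)     => "capture"
  case MoveType.Normal(false)    => "normal"
  case MoveType.CastleKingside   => "castleKingside"
  case MoveType.CastleQueenside  => "castleQueenside"
  case MoveType.EnPassant        => "enPassant"
  case MoveType.Promotion(_)     => "promotion"
```

Keeping the richer ADT in code while flattening it at the API boundary lets the schema stay a simple string enum while the engine still pattern-matches on capture and promotion details.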
@@ -1,27 +1,26 @@
package de.nowchess.api.player

/**
 * An opaque player identifier.
 *
 * Wraps a plain String so that IDs are not accidentally interchanged with
 * other String values at compile time.
 */
/** An opaque player identifier.
  *
  * Wraps a plain String so that IDs are not accidentally interchanged with other String values at compile time.
  */
opaque type PlayerId = String

object PlayerId:
  def apply(value: String): PlayerId = value
  extension (id: PlayerId) def value: String = id

/**
 * The minimal cross-service identity stub for a player.
 *
 * Full profile data (email, rating history, etc.) lives in the user-management
 * service. Only what every service needs is held here.
 *
 * @param id unique identifier
 * @param displayName human-readable name shown in the UI
 */
/** The minimal cross-service identity stub for a player.
  *
  * Full profile data (email, rating history, etc.) lives in the user-management service. Only what every service needs
  * is held here.
  *
  * @param id
  *   unique identifier
  * @param displayName
  *   human-readable name shown in the UI
  */
final case class PlayerInfo(
    id: PlayerId,
    displayName: String
    displayName: String,
)

@@ -1,13 +1,12 @@
package de.nowchess.api.response

/**
 * A standardised envelope for every API response.
 *
 * Success and failure are modelled as subtypes so that callers
 * can pattern-match exhaustively.
 *
 * @tparam A the payload type for a successful response
 */
/** A standardised envelope for every API response.
  *
  * Success and failure are modelled as subtypes so that callers can pattern-match exhaustively.
  *
  * @tparam A
  *   the payload type for a successful response
  */
sealed trait ApiResponse[+A]

object ApiResponse:

@@ -20,43 +19,49 @@ object ApiResponse:
  /** Convenience constructor for a single-error failure. */
  def error(err: ApiError): Failure = Failure(List(err))

/**
 * A structured error descriptor.
 *
 * @param code machine-readable error code (e.g. "INVALID_MOVE", "NOT_FOUND")
 * @param message human-readable explanation
 * @param field optional field name when the error relates to a specific input
 */
/** A structured error descriptor.
  *
  * @param code
  *   machine-readable error code (e.g. "INVALID_MOVE", "NOT_FOUND")
  * @param message
  *   human-readable explanation
  * @param field
  *   optional field name when the error relates to a specific input
  */
final case class ApiError(
    code: String,
    message: String,
    field: Option[String] = None
    field: Option[String] = None,
)

/**
 * Pagination metadata for list responses.
 *
 * @param page current 0-based page index
 * @param pageSize number of items per page
 * @param totalItems total number of items across all pages
 */
/** Pagination metadata for list responses.
  *
  * @param page
  *   current 0-based page index
  * @param pageSize
  *   number of items per page
  * @param totalItems
  *   total number of items across all pages
  */
final case class Pagination(
    page: Int,
    pageSize: Int,
    totalItems: Long
    totalItems: Long,
):
  def totalPages: Int =
    if pageSize <= 0 then 0
    else Math.ceil(totalItems.toDouble / pageSize).toInt

/**
 * A paginated list response envelope.
 *
 * @param items the items on the current page
 * @param pagination pagination metadata
 * @tparam A the item type
 */
/** A paginated list response envelope.
  *
  * @param items
  *   the items on the current page
  * @param pagination
  *   pagination metadata
  * @tparam A
  *   the item type
  */
final case class PagedResponse[A](
    items: List[A],
    pagination: Pagination
    pagination: Pagination,
)

@@ -22,9 +22,9 @@ class BoardTest extends AnyFunSuite with Matchers:
  }

  test("withMove returns captured piece when destination is occupied") {
    val from = Square(File.A, Rank.R1)
    val to = Square(File.A, Rank.R8)
    val b = Board(Map(from -> Piece.WhiteRook, to -> Piece.BlackRook))
    val (board, captured) = b.withMove(from, to)
    captured shouldBe Some(Piece.BlackRook)
    board.pieceAt(to) shouldBe Some(Piece.WhiteRook)

@@ -51,8 +51,14 @@ class BoardTest extends AnyFunSuite with Matchers:

  test("initial board white back rank") {
    val expectedBackRank = Vector(
      PieceType.Rook, PieceType.Knight, PieceType.Bishop, PieceType.Queen,
      PieceType.King, PieceType.Bishop, PieceType.Knight, PieceType.Rook
      PieceType.Rook,
      PieceType.Knight,
      PieceType.Bishop,
      PieceType.Queen,
      PieceType.King,
      PieceType.Bishop,
      PieceType.Knight,
      PieceType.Rook,
    )
    File.values.zipWithIndex.foreach { (file, i) =>
      Board.initial.pieceAt(Square(file, Rank.R1)) shouldBe

@@ -62,8 +68,14 @@ class BoardTest extends AnyFunSuite with Matchers:

  test("initial board black back rank") {
    val expectedBackRank = Vector(
      PieceType.Rook, PieceType.Knight, PieceType.Bishop, PieceType.Queen,
      PieceType.King, PieceType.Bishop, PieceType.Knight, PieceType.Rook
      PieceType.Rook,
      PieceType.Knight,
      PieceType.Bishop,
      PieceType.Queen,
      PieceType.King,
      PieceType.Bishop,
      PieceType.Knight,
      PieceType.Rook,
    )
    File.values.zipWithIndex.foreach { (file, i) =>
      Board.initial.pieceAt(Square(file, Rank.R8)) shouldBe

@@ -76,12 +88,11 @@ class BoardTest extends AnyFunSuite with Matchers:
    for
      rank <- emptyRanks
      file <- File.values
    do
      Board.initial.pieceAt(Square(file, rank)) shouldBe None
    do Board.initial.pieceAt(Square(file, rank)) shouldBe None
  }

  test("updated adds and replaces piece at squares") {
    val b = Board(Map(e2 -> Piece.WhitePawn))
    val added = b.updated(e4, Piece.WhiteKnight)
    added.pieceAt(e2) shouldBe Some(Piece.WhitePawn)
    added.pieceAt(e4) shouldBe Some(Piece.WhiteKnight)

@@ -91,7 +102,7 @@ class BoardTest extends AnyFunSuite with Matchers:
  }

  test("removed deletes piece from board") {
    val b = Board(Map(e2 -> Piece.WhitePawn, e4 -> Piece.WhiteKnight))
    val removed = b.removed(e2)
    removed.pieceAt(e2) shouldBe None
    removed.pieceAt(e4) shouldBe Some(Piece.WhiteKnight)

@@ -105,4 +116,3 @@ class BoardTest extends AnyFunSuite with Matchers:
    moved.pieceAt(e4) shouldBe Some(Piece.WhitePawn)
    moved.pieceAt(e2) shouldBe None
  }

@@ -10,7 +10,7 @@ class CastlingRightsTest extends AnyFunSuite with Matchers:
    whiteKingSide = true,
    whiteQueenSide = false,
    blackKingSide = false,
    blackQueenSide = true
    blackQueenSide = true,
  )

  rights.hasAnyRights shouldBe true

@@ -54,4 +54,3 @@ class CastlingRightsTest extends AnyFunSuite with Matchers:
  val blackQueenSideRevoked = all.revokeQueenSide(Color.Black)
  blackQueenSideRevoked.blackKingSide shouldBe true
  blackQueenSideRevoked.blackQueenSide shouldBe false

@@ -8,7 +8,7 @@ class ColorTest extends AnyFunSuite with Matchers:
  test("Color values expose opposite and label consistently"):
    val cases = List(
      (Color.White, Color.Black, "White"),
      (Color.Black, Color.White, "Black")
      (Color.Black, Color.White, "Black"),
    )

    cases.foreach { (color, opposite, label) =>

@@ -7,24 +7,24 @@ class PieceTest extends AnyFunSuite with Matchers:

  test("Piece holds color and pieceType") {
    val p = Piece(Color.White, PieceType.Queen)
    p.color shouldBe Color.White
    p.pieceType shouldBe PieceType.Queen
  }

  test("all convenience constants map to expected color and piece type") {
    val expected = List(
      Piece.WhitePawn -> Piece(Color.White, PieceType.Pawn),
      Piece.WhiteKnight -> Piece(Color.White, PieceType.Knight),
      Piece.WhiteBishop -> Piece(Color.White, PieceType.Bishop),
      Piece.WhiteRook -> Piece(Color.White, PieceType.Rook),
      Piece.WhiteQueen -> Piece(Color.White, PieceType.Queen),
      Piece.WhiteKing -> Piece(Color.White, PieceType.King),
      Piece.BlackPawn -> Piece(Color.Black, PieceType.Pawn),
      Piece.BlackKnight -> Piece(Color.Black, PieceType.Knight),
      Piece.BlackBishop -> Piece(Color.Black, PieceType.Bishop),
      Piece.BlackRook -> Piece(Color.Black, PieceType.Rook),
      Piece.BlackQueen -> Piece(Color.Black, PieceType.Queen),
-     Piece.BlackKing -> Piece(Color.Black, PieceType.King)
+     Piece.BlackKing -> Piece(Color.Black, PieceType.King),
    )

    expected.foreach { case (actual, wanted) =>
@@ -7,12 +7,12 @@ class PieceTypeTest extends AnyFunSuite with Matchers:

  test("PieceType values expose the expected labels"):
    val expectedLabels = List(
      PieceType.Pawn -> "Pawn",
      PieceType.Knight -> "Knight",
      PieceType.Bishop -> "Bishop",
      PieceType.Rook -> "Rook",
      PieceType.Queen -> "Queen",
-     PieceType.King -> "King"
+     PieceType.King -> "King",
    )

    expectedLabels.foreach { (pieceType, expectedLabel) =>
@@ -16,7 +16,7 @@ class SquareTest extends AnyFunSuite with Matchers:
      "a1" -> Square(File.A, Rank.R1),
      "e4" -> Square(File.E, Rank.R4),
      "h8" -> Square(File.H, Rank.R8),
-     "E4" -> Square(File.E, Rank.R4)
+     "E4" -> Square(File.E, Rank.R4),
    )
    expected.foreach { case (raw, sq) =>
      Square.fromAlgebraic(raw) shouldBe Some(sq)
@@ -34,4 +34,3 @@ class SquareTest extends AnyFunSuite with Matchers:
    Square(File.A, Rank.R1).offset(-1, 0) shouldBe None
    Square(File.H, Rank.R8).offset(0, 1) shouldBe None
  }
@@ -2,6 +2,7 @@ package de.nowchess.api.game

import de.nowchess.api.board.{Board, CastlingRights, Color, File, Rank, Square}
import de.nowchess.api.move.Move
import de.nowchess.api.game.{DrawReason, GameResult}
import org.scalatest.funsuite.AnyFunSuite
import org.scalatest.matchers.should.Matchers

@@ -16,11 +17,12 @@ class GameContextTest extends AnyFunSuite with Matchers:
    initial.enPassantSquare shouldBe None
    initial.halfMoveClock shouldBe 0
    initial.moves shouldBe List.empty
    initial.result shouldBe None

  test("withBoard updates only board"):
    val square = Square(File.E, Rank.R4)
    val updatedBoard = Board.initial.updated(square, de.nowchess.api.board.Piece.WhiteQueen)
    val updated = GameContext.initial.withBoard(updatedBoard)
    updated.board shouldBe updatedBoard
    updated.turn shouldBe GameContext.initial.turn
    updated.castlingRights shouldBe GameContext.initial.castlingRights
@@ -34,13 +36,13 @@ class GameContextTest extends AnyFunSuite with Matchers:
      whiteKingSide = true,
      whiteQueenSide = false,
      blackKingSide = false,
-     blackQueenSide = true
+     blackQueenSide = true,
    )
    val square = Some(Square(File.E, Rank.R3))
    val updatedTurn = initial.withTurn(Color.Black)
    val updatedRights = initial.withCastlingRights(rights)
    val updatedEp = initial.withEnPassantSquare(square)
    val updatedClock = initial.withHalfMoveClock(17)

    updatedTurn.turn shouldBe Color.Black
    updatedTurn.board shouldBe initial.board
@@ -58,3 +60,20 @@ class GameContextTest extends AnyFunSuite with Matchers:
    val move = Move(Square(File.E, Rank.R2), Square(File.E, Rank.R4))
    GameContext.initial.withMove(move).moves shouldBe List(move)

  test("withResult sets Win result"):
    val win = Some(GameResult.Win(Color.White))
    GameContext.initial.withResult(win).result shouldBe win

  test("withResult sets Draw result"):
    val draw = Some(GameResult.Draw(DrawReason.Stalemate))
    GameContext.initial.withResult(draw).result shouldBe draw

  test("withResult clears result"):
    val ctx = GameContext.initial.withResult(Some(GameResult.Win(Color.Black)))
    ctx.withResult(None).result shouldBe None

  test("kingSquare returns white king position"):
    GameContext.initial.kingSquare(Color.White) shouldBe Some(Square(File.E, Rank.R1))

  test("kingSquare returns black king position"):
    GameContext.initial.kingSquare(Color.Black) shouldBe Some(Square(File.E, Rank.R8))
@@ -25,7 +25,7 @@ class MoveTest extends AnyFunSuite with Matchers:
      MoveType.Promotion(PromotionPiece.Queen),
      MoveType.Promotion(PromotionPiece.Rook),
      MoveType.Promotion(PromotionPiece.Bishop),
-     MoveType.Promotion(PromotionPiece.Knight)
+     MoveType.Promotion(PromotionPiece.Knight),
    )

    moveTypes.foreach { moveType =>
@@ -7,12 +7,12 @@ class PlayerInfoTest extends AnyFunSuite with Matchers:

  test("PlayerId and PlayerInfo preserve constructor values") {
    val raw = "player-123"
    val id = PlayerId(raw)

    id.value shouldBe raw

    val playerId = PlayerId("p1")
    val info = PlayerInfo(playerId, "Magnus")
    info.id.value shouldBe "p1"
    info.displayName shouldBe "Magnus"
  }
@@ -14,9 +14,9 @@ class ApiResponseTest extends AnyFunSuite with Matchers:
    ApiResponse.error(err) shouldBe ApiResponse.Failure(List(err))

    val e = ApiError("CODE", "message")
    e.code shouldBe "CODE"
    e.message shouldBe "message"
    e.field shouldBe None
    ApiError("INVALID", "bad value", Some("email")).field shouldBe Some("email")
  }

@@ -31,6 +31,6 @@ class ApiResponseTest extends AnyFunSuite with Matchers:
  test("PagedResponse holds items and pagination") {
    val pagination = Pagination(page = 1, pageSize = 5, totalItems = 20)
    val pr = PagedResponse(List("a", "b"), pagination)
    pr.items shouldBe List("a", "b")
    pr.pagination shouldBe pagination
  }
@@ -1,3 +1,3 @@
MAJOR=0
-MINOR=2
+MINOR=7
PATCH=0
@@ -0,0 +1,86 @@
plugins {
    id("scala")
    id("org.scoverage")
}

group = "de.nowchess"
version = "1.0-SNAPSHOT"

@Suppress("UNCHECKED_CAST")
val versions = rootProject.extra["VERSIONS"] as Map<String, String>

repositories {
    mavenCentral()
}

scala {
    scalaVersion = versions["SCALA3"]!!
}

scoverage {
    scoverageVersion.set(versions["SCOVERAGE"]!!)
    excludedPackages.set(
        listOf(
            "de\\.nowchess\\.bot\\.bots\\.NNUEBot",
            "de\\.nowchess\\.bot\\.bots\\.nnue\\..*",
            "de\\.nowchess\\.bot\\.util\\.PolyglotBook",
        )
    )
    excludedFiles.set(
        listOf(
            ".*NNUE\\.scala",
            ".*NNUEBot\\.scala",
            ".*NbaiLoader\\.scala",
            ".*NbaiMigrator\\.scala",
            ".*NbaiWriter\\.scala",
            ".*PolyglotBook\\.scala",
        )
    )
}

tasks.withType<ScalaCompile> {
    scalaCompileOptions.additionalParameters = listOf("-encoding", "UTF-8")
}

dependencies {

    implementation("org.scala-lang:scala3-compiler_3") {
        version {
            strictly(versions["SCALA3"]!!)
        }
    }
    implementation("org.scala-lang:scala3-library_3") {
        version {
            strictly(versions["SCALA3"]!!)
        }
    }

    implementation(project(":modules:api"))
    implementation(project(":modules:io"))
    implementation(project(":modules:rule"))
    implementation("com.microsoft.onnxruntime:onnxruntime:${versions["ONNXRUNTIME"]!!}")

    testImplementation(platform("org.junit:junit-bom:${versions["JUNIT_BOM"]!!}"))
    testImplementation("org.junit.jupiter:junit-jupiter")
    testImplementation("org.scalatest:scalatest_3:${versions["SCALATEST"]!!}")
    testImplementation("co.helmethair:scalatest-junit-runner:${versions["SCALATEST_JUNIT"]!!}")

    testRuntimeOnly("org.junit.platform:junit-platform-launcher")
}

tasks.test {
    useJUnitPlatform {
        includeEngines("scalatest")
    }
    testLogging {
        events("skipped", "failed")
    }
    finalizedBy(tasks.reportScoverage)
}

tasks.reportScoverage {
    dependsOn(tasks.test)
}

tasks.jar {
    duplicatesStrategy = DuplicatesStrategy.EXCLUDE
}
Binary file not shown.
@@ -0,0 +1,22 @@
# Data and weights are local artifacts, not committed
data/

# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
env/
venv/
ENV/
.venv

# IDE
.idea/
.vscode/
*.swp
*.swo
tactical_data/
trainingdata/
/datasets/
@@ -0,0 +1,173 @@
# Training Dataset Management

The NNUE training pipeline now features versioned dataset management, similar to model versioning. This prevents data loss and lets you maintain multiple training configurations side by side.

## Directory Structure

```
datasets/
  ds_v1/
    labeled.jsonl    # Training data: {"fen": "...", "eval": 0.5, "eval_raw": 150}
    metadata.json    # Version info and composition
  ds_v2/
    labeled.jsonl
    metadata.json
```

## Metadata Schema

Each dataset has a `metadata.json` file tracking its composition:

```json
{
  "version": 1,
  "created": "2026-04-13T15:30:45.123456",
  "total_positions": 1000000,
  "stockfish_depth": 12,
  "sources": [
    {
      "type": "generated",
      "count": 500000,
      "params": {
        "num_positions": 500000,
        "min_move": 1,
        "max_move": 50
      }
    },
    {
      "type": "tactical",
      "count": 300000,
      "max_puzzles": 300000
    },
    {
      "type": "file_import",
      "count": 200000,
      "path": "/path/to/original_file.txt"
    }
  ]
}
```

## TUI Workflow

### Main Menu
```
1 - Manage Training Data
2 - Train Model
3 - Export Model
4 - Exit
```

### Training Data Management Submenu
```
1 - Create new dataset
2 - Extend existing dataset
3 - View all datasets
4 - Delete dataset
5 - Back
```

## Creating a Dataset

Use the "Create new dataset" option to add data from one or more sources:

1. **Generate random positions** — Play random games and sample positions
   - Number of positions
   - Move range (min/max move number to sample from)
   - Number of worker threads

2. **Import from file** — Load positions from a FEN file
   - File must contain one FEN string per line
   - Duplicates are automatically removed

3. **Extract tactical puzzles** — Download and extract the Lichess puzzle database
   - Maximum number of puzzles to include
   - Automatically filters for tactical themes (forks, pins, mates, etc.)

You can combine multiple sources in a single dataset creation session. All positions are:
- Deduplicated (only unique FENs are kept)
- Labeled with Stockfish evaluations
- Saved to `datasets/ds_vN/labeled.jsonl`

## Extending a Dataset

Use "Extend existing dataset" to add more positions to an existing dataset:

1. Select the dataset version to extend
2. Choose data sources (same options as creation)
3. Confirm labeling parameters
4. New positions are:
   - Labeled with Stockfish
   - Deduplicated against the target dataset's existing positions
   - Merged into the existing `labeled.jsonl`, with the metadata updated to record the new source entry

## Training with a Dataset

When you start training (Standard or Burst mode), you'll be prompted to select a dataset version. The TUI displays all available datasets with:
- Version number
- Total number of positions
- Source types (generated, tactical, imported)
- Stockfish depth used
- Creation date

## Legacy Data Migration

If you have existing labeled data in `data/training_data.jsonl` from before this update:

1. Open the "Manage Training Data" menu
2. Choose "Create new dataset"
3. Select "Import from file"
4. Point to `data/training_data.jsonl`
5. Complete the dataset creation

Alternatively, you can manually copy the file to `datasets/ds_v1/labeled.jsonl` and create a `metadata.json` file.

## Viewing Dataset Details

Use "View all datasets" to see a table of all datasets with:
- Version number
- Position count
- Source composition
- Stockfish depth
- Creation date

## Deleting a Dataset

Use "Delete dataset" to remove a dataset and free up disk space. **This action cannot be undone.**

⚠️ The system does not prevent deleting datasets used by model checkpoints. Plan accordingly.

## Technical Details

### Deduplication Strategy

When extending a dataset, positions are deduplicated **within that dataset only**. This allows different datasets to contain overlapping positions if desired.

When creating a new dataset from multiple sources, all sources are combined and deduplicated before labeling.
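The within-dataset check can be sketched as follows. This is a minimal illustration, not the actual `dataset.py` API; the helper name and file handling are assumptions:

```python
import json

def merge_unique_fens(labeled_path, new_fens):
    """Keep only the FENs that the target dataset does not already contain.

    Note: only this dataset's labeled.jsonl is consulted, so overlap with
    other datasets remains possible, as described above.
    """
    seen = set()
    try:
        with open(labeled_path) as f:
            for line in f:
                seen.add(json.loads(line)["fen"])
    except FileNotFoundError:
        pass  # brand-new dataset: nothing to deduplicate against
    return [fen for fen in new_fens if fen not in seen]
```

The same set-based pass also covers the "combine all sources, then deduplicate" path for new datasets.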

### Labeled Position Format

Each line in `labeled.jsonl` is a JSON object:
```json
{
  "fen": "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1",
  "eval": 0.0,
  "eval_raw": 0
}
```

- `fen`: The position in Forsyth-Edwards Notation
- `eval`: Normalized evaluation ([-1, 1] range using tanh)
- `eval_raw`: Raw Stockfish evaluation in centipawns
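The relationship between the two eval fields can be sketched like this; the 400-centipawn scale is an assumed constant for illustration, since the divisor used by the labeler is not stated here:

```python
import math

def normalize_eval(eval_raw, scale=400.0):
    """Map a raw centipawn score onto [-1, 1] with tanh.

    `scale` controls how quickly large advantages saturate; 400 cp is
    an assumption, not taken from the labeling code.
    """
    return math.tanh(eval_raw / scale)
```

Whatever the exact scale, tanh guarantees the bounded range while keeping small advantages roughly linear.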

### Storage Location

Datasets are stored in the `datasets/` directory relative to the script location. The old `data/` directory is preserved for backward compatibility but is not actively used by the new system.

## Performance Tips

- **Smaller datasets train faster** — Start with 100k-500k positions
- **Deduplication matters** — Use the extend functionality to build up your dataset without redundant data
- **Stockfish depth** — Depth 12-14 balances accuracy and labeling speed
- **Workers** — Use 4-8 workers for labeling if your machine supports it; more workers are faster but use more CPU and memory
@@ -0,0 +1,129 @@
# NNUE Python Pipeline

Central CLI for training and exporting chess evaluation neural networks (NNUE).

## Directory Structure

```
python/
├── nnue.py                  # Main CLI entry point
├── src/                     # Python modules
│   ├── generate.py          # Generate random chess positions
│   ├── label.py             # Label positions with Stockfish
│   ├── train.py             # Train NNUE model
│   └── export.py            # Export weights to Scala
├── data/                    # Training data (gitignored)
│   ├── positions.txt
│   └── training_data.jsonl
└── weights/                 # Model weights (gitignored)
    ├── nnue_weights_v1.pt
    ├── nnue_weights_v1_metadata.json
    └── ...
```

## Quick Start

```bash
# Train a new model (500k positions, auto-detect checkpoint)
python nnue.py train

# Train from specific checkpoint
python nnue.py train --from-checkpoint 2

# Train with custom games count
python nnue.py train --games 200000

# Train with custom positions file
python nnue.py train --positions-file my_positions.txt

# Export specific version to Scala
python nnue.py export 2

# List all checkpoints
python nnue.py list
```

## CLI Commands

### `train` - Train NNUE model

```bash
python nnue.py train [OPTIONS]
```

**Options:**
- `--from-checkpoint N` - Resume from checkpoint version N (default: uses latest)
- `--games N` - Number of games to generate (default: 500000)
- `--positions-file FILE` - Use an existing positions file instead of generating
- `--stockfish PATH` - Path to Stockfish binary (default: `$STOCKFISH_PATH` or `/usr/games/stockfish`)

**Examples:**
```bash
# Train with latest checkpoint
python nnue.py train

# Train from v2 with 100k games
python nnue.py train --from-checkpoint 2 --games 100000

# Train with custom positions
python nnue.py train --positions-file my_games.txt --stockfish /opt/stockfish/sf15
```

### `export` - Export weights to Scala

```bash
python nnue.py export WEIGHTS [output_path]
```

**Arguments:**
- `WEIGHTS` - Version number (e.g., `2`) or full filename (e.g., `nnue_weights_v2.pt`)

**Examples:**
```bash
# Export version 2
python nnue.py export 2

# Export with full filename
python nnue.py export nnue_weights_v3.pt
```

Output goes to `../src/main/scala/de/nowchess/bot/bots/nnue/NNUEWeights_vN.scala`.

### `list` - List available checkpoints

```bash
python nnue.py list
```

Shows all available model versions with file sizes.

## Data Flow

1. **Generate** → `data/positions.txt`
   - Random chess positions from 8-20 move openings
   - Filters out checks, game-over states, and captures

2. **Label** → `data/training_data.jsonl`
   - Evaluates each position with Stockfish at depth 12
   - Stores FEN + evaluation in JSONL format

3. **Train** → `weights/nnue_weights_vN.pt`
   - Trains the neural network on labeled positions
   - Auto-versioning (v1, v2, v3, etc.)
   - Saves metadata alongside the weights

4. **Export** → `NNUEWeights_vN.scala`
   - Converts weights to a Scala object
   - Ready for integration into the bot
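The labeled JSONL from step 2 is plain line-delimited JSON, so a consumer can be sketched as below; the field names follow the documented record format, and the default path is the one listed above:

```python
import json

def load_training_data(path="data/training_data.jsonl"):
    """Yield (fen, eval) pairs from the JSONL file written by the label step."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # tolerate blank lines between records
            record = json.loads(line)
            yield record["fen"], record["eval"]
```

Because it is a generator, millions of positions can be streamed into training without loading the whole file into memory.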
|
||||
|
||||
## Versioning
|
||||
|
||||
- Models are automatically versioned (v1, v2, v3, etc.)
|
||||
- Each version gets a `_metadata.json` file with training info
|
||||
- Training from checkpoint uses latest version unless specified with `--from-checkpoint`
|
||||
|
||||
## Files
|
||||
|
||||
- `data/` and `weights/` are gitignored (local artifacts)
|
||||
- Documentation in `docs/` explains training, debugging, and incremental improvements
|
||||
- Source modules in `src/` are independent and can be imported for custom workflows
|
||||
@@ -0,0 +1,951 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Central NNUE pipeline TUI for training and exporting models."""
|
||||
|
||||
import os
|
||||
import shutil
|
||||
import sys
|
||||
import tempfile
|
||||
from pathlib import Path
|
||||
from rich.console import Console
|
||||
from rich.table import Table
|
||||
from rich.panel import Panel
|
||||
from rich.prompt import Prompt, Confirm
|
||||
from rich import print as rprint
|
||||
|
||||
# Add src directory to path so we can import modules
|
||||
sys.path.insert(0, str(Path(__file__).parent / "src"))
|
||||
|
||||
from generate import play_random_game_and_collect_positions
|
||||
from label import label_positions_with_stockfish
|
||||
from train import train_nnue, burst_train, DEFAULT_HIDDEN_SIZES
|
||||
from export import export_to_nbai
|
||||
from tactical_positions_extractor import (
|
||||
download_and_extract_puzzle_db,
|
||||
extract_tactical_only
|
||||
)
|
||||
from lichess_importer import import_lichess_evals
|
||||
from dataset import (
|
||||
get_datasets_dir,
|
||||
list_datasets,
|
||||
next_dataset_version,
|
||||
load_dataset_metadata,
|
||||
create_dataset,
|
||||
extend_dataset,
|
||||
get_dataset_labeled_path,
|
||||
delete_dataset,
|
||||
show_datasets_table
|
||||
)
|
||||
|
||||
|
||||
def get_weights_dir():
|
||||
"""Get/create weights directory."""
|
||||
weights_dir = Path(__file__).parent / "weights"
|
||||
weights_dir.mkdir(exist_ok=True)
|
||||
return weights_dir
|
||||
|
||||
|
||||
def get_data_dir():
|
||||
"""Get/create legacy data directory (for migration)."""
|
||||
data_dir = Path(__file__).parent / "data"
|
||||
data_dir.mkdir(exist_ok=True)
|
||||
return data_dir
|
||||
|
||||
|
||||
def list_checkpoints():
|
||||
"""List available checkpoint versions."""
|
||||
weights_dir = get_weights_dir()
|
||||
checkpoints = sorted(weights_dir.glob("nnue_weights_v*.pt"))
|
||||
if not checkpoints:
|
||||
return []
|
||||
return [int(cp.stem.split("_v")[1]) for cp in checkpoints]
|
||||
|
||||
|
||||
def migrate_legacy_data():
|
||||
"""On first run, offer to import existing data/training_data.jsonl as ds_v1."""
|
||||
console = Console()
|
||||
data_dir = get_data_dir()
|
||||
legacy_file = data_dir / "training_data.jsonl"
|
||||
datasets = list_datasets()
|
||||
|
||||
# Only migrate if legacy data exists and no datasets exist yet
|
||||
if legacy_file.exists() and not datasets:
|
||||
console.print("\n[cyan]Legacy data detected: data/training_data.jsonl[/cyan]")
|
||||
console.print("[dim]Tip: Use 'Manage Training Data' menu to import it as ds_v1[/dim]")
|
||||
|
||||
|
||||
def show_header():
|
||||
"""Display application header."""
|
||||
console = Console()
|
||||
console.clear()
|
||||
console.print(
|
||||
Panel(
|
||||
"[bold cyan]🧠 NNUE Training Pipeline[/bold cyan]\n"
|
||||
"[dim]Neural Network Utility Evaluation - Dataset & Model Management[/dim]",
|
||||
border_style="cyan",
|
||||
padding=(1, 2),
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
def show_checkpoints_table():
|
||||
"""Display available checkpoints in a table."""
|
||||
console = Console()
|
||||
available = list_checkpoints()
|
||||
|
||||
if not available:
|
||||
console.print("[yellow]ℹ No model checkpoints found yet[/yellow]")
|
||||
return
|
||||
|
||||
table = Table(title="Available Model Checkpoints", show_header=True, header_style="bold cyan")
|
||||
table.add_column("Version", style="dim")
|
||||
table.add_column("File Size", justify="right")
|
||||
table.add_column("Status", justify="center")
|
||||
|
||||
weights_dir = get_weights_dir()
|
||||
for v in sorted(available):
|
||||
weights_file = weights_dir / f"nnue_weights_v{v}.pt"
|
||||
if weights_file.exists():
|
||||
size = weights_file.stat().st_size / (1024**2)
|
||||
table.add_row(f"v{v}", f"{size:.1f} MB", "✓ Ready")
|
||||
else:
|
||||
table.add_row(f"v{v}", "?", "[red]✗ Missing[/red]")
|
||||
|
||||
console.print(table)
|
||||
|
||||
|
||||
def show_main_menu():
|
||||
"""Display and handle main menu."""
|
||||
console = Console()
|
||||
|
||||
# Migrate legacy data on first run
|
||||
migrate_legacy_data()
|
||||
|
||||
while True:
|
||||
show_header()
|
||||
show_checkpoints_table()
|
||||
|
||||
console.print("\n[bold]What would you like to do?[/bold]")
|
||||
console.print("[cyan]1[/cyan] - Manage Training Data")
|
||||
console.print("[cyan]2[/cyan] - Train Model")
|
||||
console.print("[cyan]3[/cyan] - Export Model")
|
||||
console.print("[cyan]4[/cyan] - Exit")
|
||||
|
||||
choice = Prompt.ask("\nSelect option", choices=["1", "2", "3", "4"])
|
||||
|
||||
if choice == "1":
|
||||
datasets_menu()
|
||||
elif choice == "2":
|
||||
training_menu()
|
||||
elif choice == "3":
|
||||
export_interactive()
|
||||
elif choice == "4":
|
||||
console.print("[yellow]👋 Goodbye![/yellow]")
|
||||
return
|
||||
|
||||
|
||||
def datasets_menu():
|
||||
"""Dataset management submenu."""
|
||||
console = Console()
|
||||
|
||||
while True:
|
||||
show_header()
|
||||
show_datasets_table(console)
|
||||
|
||||
console.print("\n[bold]Training Data Management[/bold]")
|
||||
console.print("[cyan]1[/cyan] - Create new dataset")
|
||||
console.print("[cyan]2[/cyan] - Extend existing dataset")
|
||||
console.print("[cyan]3[/cyan] - View all datasets")
|
||||
console.print("[cyan]4[/cyan] - Delete dataset")
|
||||
console.print("[cyan]5[/cyan] - Back")
|
||||
|
||||
choice = Prompt.ask("\nSelect option", choices=["1", "2", "3", "4", "5"])
|
||||
|
||||
if choice == "1":
|
||||
create_dataset_interactive()
|
||||
elif choice == "2":
|
||||
extend_dataset_interactive()
|
||||
elif choice == "3":
|
||||
show_header()
|
||||
show_datasets_table(console)
|
||||
Prompt.ask("\nPress Enter to continue")
|
||||
elif choice == "4":
|
||||
delete_dataset_interactive()
|
||||
elif choice == "5":
|
||||
return
|
||||
|
||||
|
||||
def create_dataset_interactive():
|
||||
"""Interactive dataset creation flow."""
|
||||
console = Console()
|
||||
show_header()
|
||||
|
||||
console.print("\n[bold cyan]📊 Create New Dataset[/bold cyan]")
|
||||
|
||||
sources = []
|
||||
combined_count = 0
|
||||
|
||||
# Allow user to add multiple sources
|
||||
while True:
|
||||
console.print("\n[bold]Add data source (repeat until done):[/bold]")
|
||||
console.print("[cyan]a[/cyan] - Generate random positions")
|
||||
console.print("[cyan]b[/cyan] - Import from file")
|
||||
console.print("[cyan]c[/cyan] - Extract Lichess tactical puzzles")
|
||||
console.print("[cyan]d[/cyan] - Import Lichess eval database (.jsonl.zst)")
|
||||
console.print("[cyan]e[/cyan] - Done adding sources")
|
||||
|
||||
choice = Prompt.ask("Select", choices=["a", "b", "c", "d", "e"])
|
||||
|
||||
if choice == "a":
|
||||
num_positions = int(Prompt.ask("Number of positions to generate", default="100000"))
|
||||
min_move = int(Prompt.ask("Minimum move number", default="1"))
|
||||
max_move = int(Prompt.ask("Maximum move number", default="50"))
|
||||
num_workers = int(Prompt.ask("Number of workers", default="8"))
|
||||
|
||||
console.print("[dim]Generating positions...[/dim]")
|
||||
temp_file = Path(tempfile.gettempdir()) / "temp_positions.txt"
|
||||
count = play_random_game_and_collect_positions(
|
||||
str(temp_file),
|
||||
total_positions=num_positions,
|
||||
samples_per_game=1,
|
||||
min_move=min_move,
|
||||
max_move=max_move,
|
||||
num_workers=num_workers
|
||||
)
|
||||
if count > 0:
|
||||
sources.append({
|
||||
"type": "generated",
|
||||
"count": count,
|
||||
"params": {"num_positions": num_positions, "min_move": min_move, "max_move": max_move}
|
||||
})
|
||||
combined_count += count
|
||||
console.print(f"[green]✓ {count:,} positions generated[/green]")
|
||||
else:
|
||||
console.print("[red]✗ Generation failed[/red]")
|
||||
|
||||
elif choice == "b":
|
||||
file_path = Prompt.ask("Path to FEN file")
|
||||
try:
|
||||
with open(file_path, 'r') as f:
|
||||
count = sum(1 for _ in f)
|
||||
sources.append({"type": "file_import", "count": count, "path": file_path})
|
||||
combined_count += count
|
||||
console.print(f"[green]✓ {count:,} positions from file[/green]")
|
||||
except FileNotFoundError:
|
||||
console.print(f"[red]✗ File not found: {file_path}[/red]")
|
||||
|
||||
elif choice == "c":
|
||||
max_puzzles = int(Prompt.ask("Maximum puzzles to extract", default="300000"))
|
||||
console.print("[dim]Extracting tactical positions...[/dim]")
|
||||
temp_file = Path(tempfile.gettempdir()) / "temp_tactical.txt"
|
||||
try:
|
||||
csv_path = download_and_extract_puzzle_db(output_dir=str(Path(__file__).parent / "tactical_data"))
|
||||
if csv_path:
|
||||
count = extract_tactical_only(csv_path, str(temp_file), max_puzzles)
|
||||
sources.append({"type": "tactical", "count": count, "max_puzzles": max_puzzles})
|
||||
combined_count += count
|
||||
console.print(f"[green]✓ {count:,} tactical positions extracted[/green]")
|
||||
except Exception as e:
|
||||
console.print(f"[red]✗ Tactical extraction failed: {e}[/red]")
|
||||
|
||||
elif choice == "d":
|
||||
zst_path = Prompt.ask("Path to lichess_db_eval.jsonl.zst")
|
||||
max_pos = Prompt.ask("Max positions to import (blank = no limit)", default="")
|
||||
max_pos = int(max_pos) if max_pos.strip() else None
|
||||
min_depth = int(Prompt.ask("Minimum eval depth to accept", default="20"))
|
||||
console.print("[dim]Importing Lichess evals (this may take a while)...[/dim]")
|
||||
temp_file = Path(tempfile.gettempdir()) / "temp_lichess.jsonl"
|
||||
temp_file.unlink(missing_ok=True)
|
||||
try:
|
||||
count = import_lichess_evals(
|
||||
input_path=zst_path,
|
||||
output_file=str(temp_file),
|
||||
max_positions=max_pos,
|
||||
min_depth=min_depth,
|
||||
)
|
||||
if count > 0:
|
||||
sources.append({
|
||||
"type": "lichess",
|
||||
"count": count,
|
||||
"params": {"min_depth": min_depth, "max_positions": max_pos},
|
||||
})
|
||||
combined_count += count
|
||||
console.print(f"[green]✓ {count:,} positions imported from Lichess[/green]")
|
||||
else:
|
||||
console.print("[red]✗ No positions imported[/red]")
|
||||
except Exception as e:
|
||||
console.print(f"[red]✗ Lichess import failed: {e}[/red]")
|
||||
|
||||
elif choice == "e":
|
||||
if not sources:
|
||||
console.print("[yellow]⚠ No sources added yet[/yellow]")
|
||||
continue
|
||||
break
|
||||
|
||||
    if not sources:
        console.print("[yellow]Dataset creation cancelled[/yellow]")
        return

    # Determine whether any sources still need Stockfish labeling.
    # Lichess sources are already labeled; only generated/tactical/file sources need it.
    needs_labeling = any(s["type"] != "lichess" for s in sources)

    stockfish_depth = 12
    if needs_labeling:
        console.print("\n[bold cyan]🏷️ Labeling Parameters[/bold cyan]")
        stockfish_path = Prompt.ask(
            "Stockfish path",
            default=os.environ.get("STOCKFISH_PATH") or shutil.which("stockfish") or "/usr/bin/stockfish"
        )
        stockfish_depth = int(Prompt.ask("Stockfish analysis depth", default="12"))
        num_workers = int(Prompt.ask("Number of parallel workers", default="1"))

    # Summary and confirm
    console.print("\n[bold]Dataset Summary:[/bold]")
    console.print(f" Total positions: {combined_count:,}")
    for source in sources:
        console.print(f" - {source['type']}: {source['count']:,}")
    if needs_labeling:
        console.print(f" Stockfish depth: {stockfish_depth}")

    if not Confirm.ask("\nProceed to create dataset?", default=True):
        console.print("[yellow]Cancelled[/yellow]")
        return

    try:
        labeled_file = Path(tempfile.gettempdir()) / "labeled.jsonl"
        labeled_file.unlink(missing_ok=True)

        # --- Step 1: Collect already-labeled data (Lichess source) ---
        lichess_tmp = Path(tempfile.gettempdir()) / "temp_lichess.jsonl"
        if lichess_tmp.exists():
            import shutil as _shutil
            _shutil.copy(lichess_tmp, labeled_file)
            console.print("\n[bold cyan]Step 1: Pre-labeled data copied[/bold cyan]")
            console.print("[green]✓ Lichess positions ready[/green]")

        # --- Step 2: Combine unlabeled sources and run Stockfish (if any) ---
        non_lichess = [s for s in sources if s["type"] != "lichess"]
        if non_lichess:
            console.print("\n[bold cyan]Step 2: Combining unlabeled sources[/bold cyan]")
            combined_fen_file = Path(tempfile.gettempdir()) / "combined_positions.txt"
            all_fens = set()

            for source in non_lichess:
                if source["type"] == "generated":
                    temp_file = Path(tempfile.gettempdir()) / "temp_positions.txt"
                elif source["type"] == "file_import":
                    temp_file = Path(source["path"])
                elif source["type"] == "tactical":
                    temp_file = Path(tempfile.gettempdir()) / "temp_tactical.txt"
                else:
                    continue

                if temp_file.exists():
                    with open(temp_file, "r") as f:
                        for line in f:
                            fen = line.strip()
                            if fen:
                                all_fens.add(fen)

            with open(combined_fen_file, "w") as f:
                for fen in all_fens:
                    f.write(fen + "\n")
            console.print(f"[green]✓ Combined {len(all_fens):,} unique unlabeled positions[/green]")

            console.print("\n[bold cyan]Step 2b: Labeling with Stockfish[/bold cyan]")
            success = label_positions_with_stockfish(
                str(combined_fen_file),
                str(labeled_file),
                stockfish_path,
                depth=stockfish_depth,
                num_workers=num_workers,
            )
            if not success:
                console.print("[red]✗ Stockfish labeling failed[/red]")
                return
            console.print("[green]✓ Positions labeled[/green]")

        # --- Step 3: Create dataset ---
        console.print("\n[bold cyan]Step 3: Creating Dataset[/bold cyan]")
        version = next_dataset_version()
        create_dataset(
            version=version,
            labeled_jsonl_path=str(labeled_file),
            sources=sources,
            stockfish_depth=stockfish_depth,
        )
        console.print(f"[green]✓ Dataset created: ds_v{version}[/green]")
        console.print(f"[bold]Location: {get_datasets_dir() / f'ds_v{version}'}[/bold]")

        Prompt.ask("\nPress Enter to continue")

    except Exception as e:
        console.print(f"[red]✗ Error: {e}[/red]")
        import traceback
        traceback.print_exc()
        Prompt.ask("Press Enter to continue")


def extend_dataset_interactive():
    """Interactive dataset extension flow."""
    console = Console()
    show_header()

    console.print("\n[bold cyan]📊 Extend Existing Dataset[/bold cyan]")

    datasets = list_datasets()
    if not datasets:
        console.print("[yellow]ℹ No datasets available to extend[/yellow]")
        Prompt.ask("Press Enter to continue")
        return

    show_datasets_table(console)
    version = int(Prompt.ask("\nEnter dataset version to extend (e.g., 1)"))

    if not any(v == version for v, _ in datasets):
        console.print("[red]✗ Dataset not found[/red]")
        return

    sources = []
    combined_count = 0

    # Allow user to add sources
    while True:
        console.print("\n[bold]Add data source:[/bold]")
        console.print("[cyan]a[/cyan] - Generate random positions")
        console.print("[cyan]b[/cyan] - Import from file")
        console.print("[cyan]c[/cyan] - Extract Lichess tactical puzzles")
        console.print("[cyan]d[/cyan] - Import Lichess eval database (.jsonl.zst)")
        console.print("[cyan]e[/cyan] - Done adding sources")

        choice = Prompt.ask("Select", choices=["a", "b", "c", "d", "e"])

        if choice == "a":
            num_positions = int(Prompt.ask("Number of positions to generate", default="100000"))
            min_move = int(Prompt.ask("Minimum move number", default="1"))
            max_move = int(Prompt.ask("Maximum move number", default="50"))
            num_workers = int(Prompt.ask("Number of workers", default="8"))

            console.print("[dim]Generating positions...[/dim]")
            temp_file = Path(tempfile.gettempdir()) / "temp_positions.txt"
            count = play_random_game_and_collect_positions(
                str(temp_file),
                total_positions=num_positions,
                samples_per_game=1,
                min_move=min_move,
                max_move=max_move,
                num_workers=num_workers
            )
            if count > 0:
                sources.append({
                    "type": "generated",
                    "count": count,
                    "params": {"num_positions": num_positions, "min_move": min_move, "max_move": max_move}
                })
                combined_count += count
                console.print(f"[green]✓ {count:,} positions generated[/green]")

        elif choice == "b":
            file_path = Prompt.ask("Path to FEN file")
            try:
                with open(file_path, 'r') as f:
                    count = sum(1 for _ in f)
                sources.append({"type": "file_import", "count": count, "path": file_path})
                combined_count += count
                console.print(f"[green]✓ {count:,} positions from file[/green]")
            except FileNotFoundError:
                console.print(f"[red]✗ File not found: {file_path}[/red]")

        elif choice == "c":
            max_puzzles = int(Prompt.ask("Maximum puzzles to extract", default="300000"))
            console.print("[dim]Extracting tactical positions...[/dim]")
            temp_file = Path(tempfile.gettempdir()) / "temp_tactical.txt"
            try:
                csv_path = download_and_extract_puzzle_db(output_dir=str(Path(__file__).parent / "tactical_data"))
                if csv_path:
                    count = extract_tactical_only(csv_path, str(temp_file), max_puzzles)
                    sources.append({"type": "tactical", "count": count, "max_puzzles": max_puzzles})
                    combined_count += count
                    console.print(f"[green]✓ {count:,} tactical positions extracted[/green]")
            except Exception as e:
                console.print(f"[red]✗ Extraction failed: {e}[/red]")

        elif choice == "d":
            zst_path = Prompt.ask("Path to lichess_db_eval.jsonl.zst")
            max_pos = Prompt.ask("Max positions to import (blank = no limit)", default="")
            max_pos = int(max_pos) if max_pos.strip() else None
            min_depth = int(Prompt.ask("Minimum eval depth to accept", default="20"))
            console.print("[dim]Importing Lichess evals (this may take a while)...[/dim]")
            temp_file = Path(tempfile.gettempdir()) / "temp_lichess.jsonl"
            temp_file.unlink(missing_ok=True)
            try:
                count = import_lichess_evals(
                    input_path=zst_path,
                    output_file=str(temp_file),
                    max_positions=max_pos,
                    min_depth=min_depth,
                )
                if count > 0:
                    sources.append({
                        "type": "lichess",
                        "count": count,
                        "params": {"min_depth": min_depth, "max_positions": max_pos},
                    })
                    combined_count += count
                    console.print(f"[green]✓ {count:,} positions imported from Lichess[/green]")
                else:
                    console.print("[red]✗ No positions imported[/red]")
            except Exception as e:
                console.print(f"[red]✗ Lichess import failed: {e}[/red]")

        elif choice == "e":
            if not sources:
                console.print("[yellow]⚠ No sources added yet[/yellow]")
                continue
            break

    if not sources:
        console.print("[yellow]Extension cancelled[/yellow]")
        return

    needs_labeling = any(s["type"] != "lichess" for s in sources)

    stockfish_depth = 12
    if needs_labeling:
        console.print("\n[bold cyan]🏷️ Labeling Parameters[/bold cyan]")
        stockfish_path = Prompt.ask(
            "Stockfish path",
            default=os.environ.get("STOCKFISH_PATH") or shutil.which("stockfish") or "/usr/bin/stockfish"
        )
        stockfish_depth = int(Prompt.ask("Stockfish analysis depth", default="12"))
        num_workers = int(Prompt.ask("Number of parallel workers", default="1"))

    # Summary and confirm
    console.print("\n[bold]Extension Summary:[/bold]")
    console.print(f" Target dataset: ds_v{version}")
    console.print(f" New positions: {combined_count:,}")
    for source in sources:
        console.print(f" - {source['type']}: {source['count']:,}")
    if needs_labeling:
        console.print(f" Stockfish depth: {stockfish_depth}")

    if not Confirm.ask("\nProceed to extend dataset?", default=True):
        console.print("[yellow]Cancelled[/yellow]")
        return

    try:
        labeled_file = Path(tempfile.gettempdir()) / "labeled.jsonl"
        labeled_file.unlink(missing_ok=True)

        # Copy pre-labeled Lichess data if present
        lichess_tmp = Path(tempfile.gettempdir()) / "temp_lichess.jsonl"
        if lichess_tmp.exists():
            import shutil as _shutil
            _shutil.copy(lichess_tmp, labeled_file)
            console.print("\n[bold cyan]Step 1: Pre-labeled data copied[/bold cyan]")
            console.print("[green]✓ Lichess positions ready[/green]")

        # Combine and label remaining sources with Stockfish
        non_lichess = [s for s in sources if s["type"] != "lichess"]
        if non_lichess:
            console.print("\n[bold cyan]Step 2: Combining unlabeled sources[/bold cyan]")
            combined_fen_file = Path(tempfile.gettempdir()) / "combined_positions.txt"
            all_fens = set()

            for source in non_lichess:
                if source["type"] == "generated":
                    temp_file = Path(tempfile.gettempdir()) / "temp_positions.txt"
                elif source["type"] == "file_import":
                    temp_file = Path(source["path"])
                elif source["type"] == "tactical":
                    temp_file = Path(tempfile.gettempdir()) / "temp_tactical.txt"
                else:
                    continue
                if temp_file.exists():
                    with open(temp_file, "r") as f:
                        for line in f:
                            fen = line.strip()
                            if fen:
                                all_fens.add(fen)

            with open(combined_fen_file, "w") as f:
                for fen in all_fens:
                    f.write(fen + "\n")
            console.print(f"[green]✓ Combined {len(all_fens):,} unique unlabeled positions[/green]")

            console.print("\n[bold cyan]Step 2b: Labeling with Stockfish[/bold cyan]")
            success = label_positions_with_stockfish(
                str(combined_fen_file),
                str(labeled_file),
                stockfish_path,
                depth=stockfish_depth,
                num_workers=num_workers,
            )
            if not success:
                console.print("[red]✗ Stockfish labeling failed[/red]")
                return
            console.print("[green]✓ Positions labeled[/green]")

        # Extend dataset
        console.print("\n[bold cyan]Step 3: Extending Dataset[/bold cyan]")
        success = extend_dataset(
            version=version,
            new_labeled_path=str(labeled_file),
            new_source_entry={
                "type": "merged_sources",
                "count": combined_count,
                "sources": sources,
            }
        )

        if success:
            metadata = load_dataset_metadata(version)
            console.print("[green]✓ Dataset extended[/green]")
            console.print(f"[bold]Total positions: {metadata['total_positions']:,}[/bold]")
        else:
            console.print("[red]✗ Extension failed[/red]")

        Prompt.ask("\nPress Enter to continue")

    except Exception as e:
        console.print(f"[red]✗ Error: {e}[/red]")
        import traceback
        traceback.print_exc()
        Prompt.ask("Press Enter to continue")


def delete_dataset_interactive():
    """Interactive dataset deletion."""
    console = Console()
    show_header()

    console.print("\n[bold cyan]⚠️ Delete Dataset[/bold cyan]")

    datasets = list_datasets()
    if not datasets:
        console.print("[yellow]ℹ No datasets to delete[/yellow]")
        Prompt.ask("Press Enter to continue")
        return

    show_datasets_table(console)
    version = int(Prompt.ask("\nEnter dataset version to delete (e.g., 1)"))

    if not any(v == version for v, _ in datasets):
        console.print("[red]✗ Dataset not found[/red]")
        return

    if Confirm.ask(f"Delete ds_v{version}? This cannot be undone.", default=False):
        if delete_dataset(version):
            console.print(f"[green]✓ Dataset ds_v{version} deleted[/green]")
        else:
            console.print("[red]✗ Deletion failed[/red]")

    Prompt.ask("Press Enter to continue")


def training_menu():
    """Training submenu."""
    console = Console()

    while True:
        show_header()

        console.print("\n[bold]Training[/bold]")
        console.print("[cyan]1[/cyan] - Standard Training")
        console.print("[cyan]2[/cyan] - Burst Training")
        console.print("[cyan]3[/cyan] - View Model Checkpoints")
        console.print("[cyan]4[/cyan] - Back")

        choice = Prompt.ask("\nSelect option", choices=["1", "2", "3", "4"])

        if choice == "1":
            train_interactive()
        elif choice == "2":
            burst_train_interactive()
        elif choice == "3":
            show_header()
            show_checkpoints_table()
            Prompt.ask("\nPress Enter to continue")
        elif choice == "4":
            return


def train_interactive():
    """Interactive training menu."""
    console = Console()
    show_header()

    console.print("\n[bold cyan]📚 Standard Training Configuration[/bold cyan]")

    # Dataset selection
    datasets = list_datasets()
    if not datasets:
        console.print("[red]✗ No datasets available. Create one first.[/red]")
        Prompt.ask("Press Enter to continue")
        return

    console.print("\n[bold]Available Datasets:[/bold]")
    show_datasets_table(console)
    dataset_version = int(Prompt.ask("\nEnter dataset version to train on (e.g., 1)"))

    if not any(v == dataset_version for v, _ in datasets):
        console.print("[red]✗ Dataset not found[/red]")
        return

    labeled_file = get_dataset_labeled_path(dataset_version)
    if not labeled_file:
        console.print("[red]✗ Dataset labeled.jsonl not found[/red]")
        return

    # Checkpoint selection
    available = list_checkpoints()
    use_checkpoint = False
    checkpoint_version = None

    if available:
        console.print(f"\n[dim]Available checkpoints: {', '.join([f'v{v}' for v in sorted(available)])}[/dim]")
        use_checkpoint = Confirm.ask("Start from an existing checkpoint?", default=False)
        if use_checkpoint:
            checkpoint_version = Prompt.ask(
                "Enter checkpoint version",
                default=str(max(available))
            )

    # Training parameters
    epochs = int(Prompt.ask("Number of epochs", default="100"))
    batch_size = int(Prompt.ask("Batch size", default="16384"))
    subsample_ratio = float(Prompt.ask("Stochastic subsample ratio per epoch (1.0 = all data)", default="1.0"))
    default_layers = ",".join(str(s) for s in DEFAULT_HIDDEN_SIZES)
    hidden_layers_str = Prompt.ask(
        "Hidden layer sizes (comma-separated, e.g. 1536,1024,512,256)",
        default=default_layers
    )
    hidden_sizes = [int(x.strip()) for x in hidden_layers_str.split(",") if x.strip()]
    early_stopping = None
    if Confirm.ask("Enable early stopping?", default=False):
        early_stopping = int(Prompt.ask("Patience (epochs)", default="5"))

    arch_str = " → ".join(str(s) for s in [768] + hidden_sizes + [1])

    # Confirm and start
    console.print("\n[bold]Configuration Summary:[/bold]")
    console.print(f" Dataset: ds_v{dataset_version}")
    console.print(f" Architecture: {arch_str}")
    console.print(f" Epochs: {epochs}")
    console.print(f" Batch size: {batch_size}")
    console.print(f" Subsample ratio: {subsample_ratio:.0%}")
    if early_stopping:
        console.print(f" Early stopping: Yes (patience: {early_stopping})")
    else:
        console.print(" Early stopping: No")
    if use_checkpoint:
        console.print(f" Checkpoint: v{checkpoint_version}")
    else:
        console.print(" Checkpoint: None (training from scratch)")

    if not Confirm.ask("\nStart training?", default=True):
        console.print("[yellow]Training cancelled[/yellow]")
        Prompt.ask("Press Enter to continue")
        return

    # Execute training
    weights_dir = get_weights_dir()

    try:
        console.print("\n[bold cyan]Training Model[/bold cyan]")
        checkpoint = None
        if use_checkpoint:
            checkpoint = str(weights_dir / f"nnue_weights_v{checkpoint_version}.pt")

        train_nnue(
            data_file=str(labeled_file),
            output_file=str(weights_dir / "nnue_weights.pt"),
            epochs=epochs,
            batch_size=batch_size,
            checkpoint=checkpoint,
            use_versioning=True,
            early_stopping_patience=early_stopping,
            subsample_ratio=subsample_ratio,
            hidden_sizes=hidden_sizes,
        )
        console.print("[green]✓ Training complete[/green]")

        # Show result
        available = list_checkpoints()
        new_version = max(available) if available else 1
        console.print("\n[bold green]✓ Training successful![/bold green]")
        console.print(f"[bold]New checkpoint: v{new_version}[/bold]")
        Prompt.ask("Press Enter to continue")

    except Exception as e:
        console.print(f"[red]✗ Error: {e}[/red]")
        import traceback
        traceback.print_exc()
        Prompt.ask("Press Enter to continue")


def burst_train_interactive():
    """Interactive burst training menu."""
    console = Console()
    show_header()

    console.print("\n[bold cyan]⚡ Burst Training Configuration[/bold cyan]")
    console.print("[dim]Repeatedly restarts from the best checkpoint until the time budget expires.[/dim]\n")

    # Dataset selection
    datasets = list_datasets()
    if not datasets:
        console.print("[red]✗ No datasets available. Create one first.[/red]")
        Prompt.ask("Press Enter to continue")
        return

    console.print("[bold]Available Datasets:[/bold]")
    show_datasets_table(console)
    dataset_version = int(Prompt.ask("\nEnter dataset version to train on (e.g., 1)"))

    if not any(v == dataset_version for v, _ in datasets):
        console.print("[red]✗ Dataset not found[/red]")
        return

    labeled_file = get_dataset_labeled_path(dataset_version)
    if not labeled_file:
        console.print("[red]✗ Dataset labeled.jsonl not found[/red]")
        return

    duration_minutes = float(Prompt.ask("Training budget (minutes)", default="60"))
    epochs_per_season = int(Prompt.ask("Max epochs per season", default="50"))
    early_stopping_patience = int(Prompt.ask("Early stopping patience (epochs)", default="10"))

    # Optional initial checkpoint
    available = list_checkpoints()
    checkpoint = None
    if available:
        console.print(f"\n[dim]Available checkpoints: {', '.join([f'v{v}' for v in sorted(available)])}[/dim]")
        if Confirm.ask("Start from an existing checkpoint?", default=False):
            version = Prompt.ask("Enter checkpoint version", default=str(max(available)))
            checkpoint = str(get_weights_dir() / f"nnue_weights_v{version}.pt")

    # Training hyperparameters
    batch_size = int(Prompt.ask("Batch size", default="16384"))
    subsample_ratio = float(Prompt.ask("Stochastic subsample ratio per epoch (1.0 = all data)", default="1.0"))
    default_layers = ",".join(str(s) for s in DEFAULT_HIDDEN_SIZES)
    hidden_layers_str = Prompt.ask(
        "Hidden layer sizes (comma-separated, e.g. 1536,1024,512,256)",
        default=default_layers
    )
    hidden_sizes = [int(x.strip()) for x in hidden_layers_str.split(",") if x.strip()]
    arch_str = " → ".join(str(s) for s in [768] + hidden_sizes + [1])

    # Summary
    console.print("\n[bold]Configuration Summary:[/bold]")
    console.print(f" Dataset: ds_v{dataset_version}")
    console.print(f" Architecture: {arch_str}")
    console.print(f" Duration: {duration_minutes:.0f} minutes")
    console.print(f" Epochs per season: {epochs_per_season}")
    console.print(f" Patience: {early_stopping_patience}")
    console.print(f" Batch size: {batch_size}")
    console.print(f" Subsample ratio: {subsample_ratio:.0%}")
    console.print(f" Checkpoint: {checkpoint or 'None (from scratch)'}")

    if not Confirm.ask("\nStart burst training?", default=True):
        console.print("[yellow]Burst training cancelled[/yellow]")
        Prompt.ask("Press Enter to continue")
        return

    weights_dir = get_weights_dir()

    try:
        console.print("\n[bold cyan]Burst Training[/bold cyan]")
        burst_train(
            data_file=str(labeled_file),
            output_file=str(weights_dir / "nnue_weights.pt"),
            duration_minutes=duration_minutes,
            epochs_per_season=epochs_per_season,
            early_stopping_patience=early_stopping_patience,
            batch_size=batch_size,
            initial_checkpoint=checkpoint,
            use_versioning=True,
            subsample_ratio=subsample_ratio,
            hidden_sizes=hidden_sizes,
        )
        console.print("[green]✓ Burst training complete[/green]")

        available = list_checkpoints()
        if available:
            console.print(f"[bold]Latest checkpoint: v{max(available)}[/bold]")
        Prompt.ask("Press Enter to continue")

    except Exception as e:
        console.print(f"[red]✗ Error: {e}[/red]")
        import traceback
        traceback.print_exc()
        Prompt.ask("Press Enter to continue")


def export_interactive():
    """Interactive export menu."""
    console = Console()
    show_header()

    console.print("\n[bold cyan]📦 Export Configuration[/bold cyan]")

    # Select weights version
    available = list_checkpoints()
    if not available:
        console.print("[red]✗ No checkpoints available to export[/red]")
        Prompt.ask("Press Enter to continue")
        return

    console.print(f"[dim]Available versions: {', '.join([f'v{v}' for v in sorted(available)])}[/dim]")
    version = Prompt.ask("Enter version to export (e.g., 2)")

    weights_file = f"nnue_weights_v{version}.pt"
    output_file = str(Path(__file__).parent.parent / "src" / "main" / "resources" / "nnue_weights.nbai")

    console.print("\n[bold]Export Configuration:[/bold]")
    console.print(f" Source: {weights_file}")
    console.print(f" Destination: {output_file}")

    if not Confirm.ask("\nExport weights?", default=True):
        console.print("[yellow]Export cancelled[/yellow]")
        return

    try:
        weights_dir = get_weights_dir()
        weights_path = weights_dir / weights_file

        if not weights_path.exists():
            console.print(f"[red]✗ {weights_file} not found[/red]")
            return

        console.print("\n[bold cyan]Exporting Weights[/bold cyan]")
        export_to_nbai(str(weights_path), output_file)
        console.print("\n[green]✓ Export complete![/green]")
        console.print(f"[bold]Weights saved to:[/bold] {output_file}")
        Prompt.ask("Press Enter to continue")

    except Exception as e:
        console.print(f"[red]✗ Error: {e}[/red]")
        import traceback
        traceback.print_exc()
        Prompt.ask("Press Enter to continue")


def main():
    try:
        show_main_menu()
        return 0
    except KeyboardInterrupt:
        console = Console()
        console.print("\n[yellow]Interrupted by user[/yellow]")
        return 1
    except Exception as e:
        console = Console()
        console.print(f"[red]Error:[/red] {e}")
        import traceback
        traceback.print_exc()
        return 1


if __name__ == "__main__":
    sys.exit(main())
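Both interactive flows above funnel their unlabeled position sources through the same dedup step before Stockfish labeling: FEN strings from each temp file are merged into one set and written back out. A minimal self-contained sketch of that step (the function name and file layout are illustrative, not part of the CLI):

```python
from pathlib import Path


def combine_unique_fens(input_files, output_file):
    """Merge FEN position files into one, dropping duplicate positions.

    Mirrors the "Step 2: Combining unlabeled sources" logic above:
    every non-empty line is treated as one FEN, and a set removes repeats.
    """
    seen = set()
    for path in map(Path, input_files):
        if not path.exists():
            continue  # missing sources are skipped, as in the CLI
        with open(path, "r") as f:
            for line in f:
                fen = line.strip()
                if fen:
                    seen.add(fen)
    with open(output_file, "w") as f:
        for fen in seen:
            f.write(fen + "\n")
    return len(seen)
```

Writing from a set loses the original ordering; the labeling step does not depend on order, so that trade-off is acceptable here.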
@@ -0,0 +1,6 @@
chess==1.11.2
torch==2.11.0
tqdm==4.67.3
numpy==2.4.4
rich==13.7.0
zstandard==0.23.0
@@ -0,0 +1,66 @@
@echo off
REM NNUE Training Pipeline for Windows

setlocal enabledelayedexpansion

echo.
echo === NNUE Training Pipeline ===
echo.

REM Get the directory where this script is located
set SCRIPT_DIR=%~dp0

cd /d "%SCRIPT_DIR%"

REM Step 1: Generate positions
echo Step 1: Generating 500,000 random positions...
python generate_positions.py positions.txt
if not exist positions.txt (
    echo ERROR: positions.txt not created
    exit /b 1
)
echo [OK] Positions generated
echo.

REM Step 2: Label positions with Stockfish
echo Step 2: Labeling positions with Stockfish (depth 12^)...
if "%STOCKFISH_PATH%"=="" (
    set STOCKFISH_PATH=stockfish
)
python label_positions.py positions.txt training_data.jsonl "%STOCKFISH_PATH%"
if not exist training_data.jsonl (
    echo ERROR: training_data.jsonl not created
    exit /b 1
)
echo [OK] Positions labeled
echo.

REM Step 3: Train NNUE model
echo Step 3: Training NNUE model (20 epochs^)...
python train_nnue.py training_data.jsonl nnue_weights.pt
if not exist nnue_weights.pt (
    echo ERROR: nnue_weights.pt not created
    exit /b 1
)
echo [OK] Model trained
echo.

REM Step 4: Export weights to Scala
echo Step 4: Exporting weights to Scala...
python export_weights.py nnue_weights.pt ..\src\main\scala\de\nowchess\bot\bots\nnue\NNUEWeights.scala
if not exist ..\src\main\scala\de\nowchess\bot\bots\nnue\NNUEWeights.scala (
    echo ERROR: NNUEWeights.scala not created
    exit /b 1
)
echo [OK] Weights exported
echo.

echo === Pipeline Complete ===
echo.
echo Next steps:
echo 1. Navigate to project root: cd ..\..
echo 2. Compile: .\compile.bat
echo 3. Test: .\test.bat
echo.

endlocal
@@ -0,0 +1,39 @@
|
||||
#!/bin/bash
|
||||
|
||||
# NNUE Training Pipeline (bash version)
|
||||
# Uses the central CLI (nnue.py) for all operations
|
||||
# Works on Linux, macOS, and Windows (with Git Bash or WSL)
|
||||
|
||||
set -e # Exit on error
|
||||
|
||||
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
|
||||
cd "$SCRIPT_DIR"
|
||||
|
||||
# Use python or python3 (check which is available)
|
||||
PYTHON_CMD="python3"
|
||||
if ! command -v python3 &> /dev/null; then
|
||||
PYTHON_CMD="python"
|
||||
fi
|
||||
|
||||
echo "=== NNUE Training Pipeline ==="
|
||||
echo ""
|
||||
echo "Python command: $PYTHON_CMD"
|
||||
echo "Working directory: $SCRIPT_DIR"
|
||||
echo ""
|
||||
|
||||
# Run the unified training pipeline
|
||||
$PYTHON_CMD nnue.py train
|
||||
|
||||
if [ $? -ne 0 ]; then
|
||||
echo ""
|
||||
echo "ERROR: Training pipeline failed"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo ""
|
||||
echo "=== Pipeline Complete ==="
|
||||
echo ""
|
||||
echo "Next steps:"
|
||||
echo "1. Navigate to project root: cd ../.."
|
||||
echo "2. Compile: ./compile"
|
||||
echo "3. Test: ./test"
|
||||
@@ -0,0 +1,287 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Dataset versioning and management for NNUE training data."""
|
||||
|
||||
import json
|
||||
from pathlib import Path
|
||||
from datetime import datetime
|
||||
from typing import Optional, Dict, List, Tuple
|
||||
from rich.console import Console
|
||||
from rich.table import Table
|
||||
|
||||
|
||||
def get_datasets_dir() -> Path:
|
||||
"""Get/create datasets directory."""
|
||||
datasets_dir = Path(__file__).parent.parent / "datasets"
|
||||
datasets_dir.mkdir(exist_ok=True)
|
||||
return datasets_dir
|
||||
|
||||
|
||||
def next_dataset_version() -> int:
|
||||
"""Find the next available dataset version number."""
|
||||
datasets_dir = get_datasets_dir()
|
||||
versions = []
|
||||
|
||||
for d in datasets_dir.iterdir():
|
||||
if d.is_dir() and d.name.startswith("ds_v"):
|
||||
try:
|
||||
v = int(d.name.split("_v")[1])
|
||||
versions.append(v)
|
||||
except (ValueError, IndexError):
|
||||
pass
|
||||
|
||||
return max(versions) + 1 if versions else 1
|
||||
|
||||
|
||||
def list_datasets() -> List[Tuple[int, Dict]]:
|
||||
"""List all datasets with their metadata.
|
||||
|
||||
Returns:
|
||||
List of (version, metadata_dict) tuples, sorted by version.
|
||||
"""
|
||||
datasets_dir = get_datasets_dir()
|
||||
datasets = []
|
||||
|
||||
for d in datasets_dir.iterdir():
|
||||
if d.is_dir() and d.name.startswith("ds_v"):
|
||||
try:
|
||||
v = int(d.name.split("_v")[1])
|
||||
metadata_file = d / "metadata.json"
|
||||
if metadata_file.exists():
|
||||
with open(metadata_file, 'r') as f:
|
||||
metadata = json.load(f)
|
||||
datasets.append((v, metadata))
|
||||
except (ValueError, IndexError, json.JSONDecodeError):
|
||||
pass
|
||||
|
||||
return sorted(datasets, key=lambda x: x[0])
|
||||
|
||||
|
||||
def load_dataset_metadata(version: int) -> Optional[Dict]:
|
||||
"""Load metadata for a specific dataset version.
|
||||
|
||||
Returns:
|
||||
Metadata dict or None if not found.
|
||||
"""
|
||||
datasets_dir = get_datasets_dir()
|
||||
metadata_file = datasets_dir / f"ds_v{version}" / "metadata.json"
|
||||
|
||||
if not metadata_file.exists():
|
||||
return None
|
||||
|
||||
with open(metadata_file, 'r') as f:
|
||||
return json.load(f)
|
||||
|
||||
|
||||
def save_dataset_metadata(version: int, metadata: Dict) -> None:
|
||||
"""Save metadata for a dataset version."""
|
||||
datasets_dir = get_datasets_dir()
|
||||
dataset_dir = datasets_dir / f"ds_v{version}"
|
||||
dataset_dir.mkdir(exist_ok=True)
|
||||
|
||||
metadata_file = dataset_dir / "metadata.json"
|
||||
with open(metadata_file, 'w') as f:
|
||||
json.dump(metadata, f, indent=2, default=str)
|
||||
|
||||
|
||||
def create_dataset(
    version: int,
    labeled_jsonl_path: str,
    sources: List[Dict],
    stockfish_depth: int = 12
) -> Path:
    """Create a new versioned dataset.

    Args:
        version: Dataset version number
        labeled_jsonl_path: Path to labeled.jsonl to copy
        sources: List of source dicts (see plan for schema)
        stockfish_depth: Depth used for labeling

    Returns:
        Path to the created dataset directory.
    """
    datasets_dir = get_datasets_dir()
    dataset_dir = datasets_dir / f"ds_v{version}"
    dataset_dir.mkdir(exist_ok=True)

    # Copy labeled data with deduplication (in case source has duplicates)
    source_path = Path(labeled_jsonl_path)
    if source_path.exists():
        dest_path = dataset_dir / "labeled.jsonl"
        seen_fens = set()
        unique_count = 0

        with open(source_path, 'r') as src, open(dest_path, 'w') as dst:
            for line in src:
                try:
                    data = json.loads(line)
                    fen = data.get('fen')
                    if fen and fen not in seen_fens:
                        dst.write(line)
                        seen_fens.add(fen)
                        unique_count += 1
                except json.JSONDecodeError:
                    # Skip malformed lines
                    pass

    # Count positions
    total_positions = 0
    if (dataset_dir / "labeled.jsonl").exists():
        with open(dataset_dir / "labeled.jsonl", 'r') as f:
            total_positions = sum(1 for _ in f)

    # Create metadata
    metadata = {
        "version": version,
        "created": datetime.now().isoformat(),
        "total_positions": total_positions,
        "stockfish_depth": stockfish_depth,
        "sources": sources
    }

    save_dataset_metadata(version, metadata)
    return dataset_dir


def extend_dataset(
|
||||
version: int,
|
||||
new_labeled_path: str,
|
||||
new_source_entry: Dict
|
||||
) -> bool:
|
||||
"""Extend an existing dataset with new labeled positions (with deduplication).
|
||||
|
||||
Args:
|
||||
version: Dataset version to extend
|
||||
new_labeled_path: Path to new labeled.jsonl to merge
|
||||
new_source_entry: Source entry to add to metadata
|
||||
|
||||
Returns:
|
||||
True if successful, False otherwise.
|
||||
"""
|
||||
datasets_dir = get_datasets_dir()
|
||||
dataset_dir = datasets_dir / f"ds_v{version}"
|
||||
|
||||
if not dataset_dir.exists():
|
||||
return False
|
||||
|
||||
labeled_file = dataset_dir / "labeled.jsonl"
|
||||
new_labeled_file = Path(new_labeled_path)
|
||||
|
||||
if not new_labeled_file.exists():
|
||||
return False
|
||||
|
||||
# Load existing FENs (dedup set) — must load entire file to avoid duplicates
|
||||
existing_fens = set()
|
||||
if labeled_file.exists():
|
||||
with open(labeled_file, 'r') as f:
|
||||
for line in f:
|
||||
try:
|
||||
data = json.loads(line)
|
||||
fen = data.get('fen')
|
||||
if fen:
|
||||
existing_fens.add(fen)
|
||||
except json.JSONDecodeError:
|
||||
pass
|
||||
|
||||
# Merge new positions, skipping duplicates
|
||||
new_count = 0
|
||||
new_lines = []
|
||||
with open(new_labeled_file, 'r') as f_new:
|
||||
for line in f_new:
|
||||
try:
|
||||
data = json.loads(line)
|
||||
fen = data.get('fen')
|
||||
if fen and fen not in existing_fens:
|
||||
new_lines.append(line)
|
||||
existing_fens.add(fen)
|
||||
new_count += 1
|
||||
except json.JSONDecodeError:
|
||||
pass
|
||||
|
||||
# Append only the new, unique positions
|
||||
if new_lines:
|
||||
with open(labeled_file, 'a') as f_append:
|
||||
for line in new_lines:
|
||||
f_append.write(line)
|
||||
|
||||
# Update metadata
|
||||
metadata = load_dataset_metadata(version)
|
||||
if metadata:
|
||||
# Count total positions
|
||||
total_positions = 0
|
||||
with open(labeled_file, 'r') as f:
|
||||
total_positions = sum(1 for _ in f)
|
||||
|
||||
metadata['total_positions'] = total_positions
|
||||
# Update the source entry with actual count of new positions added
|
||||
new_source_entry['actual_count'] = new_count
|
||||
metadata['sources'].append(new_source_entry)
|
||||
save_dataset_metadata(version, metadata)
|
||||
|
||||
return True
|
||||
|
||||
|
||||
def get_dataset_labeled_path(version: int) -> Optional[Path]:
|
||||
"""Get the path to a dataset's labeled.jsonl file.
|
||||
|
||||
Returns:
|
||||
Path to labeled.jsonl or None if dataset doesn't exist.
|
||||
"""
|
||||
datasets_dir = get_datasets_dir()
|
||||
labeled_file = datasets_dir / f"ds_v{version}" / "labeled.jsonl"
|
||||
|
||||
if labeled_file.exists():
|
||||
return labeled_file
|
||||
return None
|
||||
|
||||
|
||||
def delete_dataset(version: int) -> bool:
|
||||
"""Delete a dataset (recursively removes directory).
|
||||
|
||||
Args:
|
||||
version: Dataset version to delete
|
||||
|
||||
Returns:
|
||||
True if successful.
|
||||
"""
|
||||
datasets_dir = get_datasets_dir()
|
||||
dataset_dir = datasets_dir / f"ds_v{version}"
|
||||
|
||||
if not dataset_dir.exists():
|
||||
return False
|
||||
|
||||
import shutil
|
||||
shutil.rmtree(dataset_dir)
|
||||
return True
|
||||
|
||||
|
||||
def show_datasets_table(console: Console = None) -> None:
|
||||
"""Display all datasets in a Rich table."""
|
||||
if console is None:
|
||||
console = Console()
|
||||
|
||||
datasets = list_datasets()
|
||||
|
||||
if not datasets:
|
||||
console.print("[yellow]ℹ No datasets found yet[/yellow]")
|
||||
return
|
||||
|
||||
table = Table(title="Available Datasets", show_header=True, header_style="bold cyan")
|
||||
table.add_column("Version", style="dim")
|
||||
table.add_column("Positions", justify="right")
|
||||
table.add_column("Sources", justify="left")
|
||||
table.add_column("Depth", justify="center")
|
||||
table.add_column("Created", justify="left")
|
||||
|
||||
for v, metadata in datasets:
|
||||
positions = metadata.get('total_positions', 0)
|
||||
sources = metadata.get('sources', [])
|
||||
source_str = ", ".join([s.get('type', '?') for s in sources])
|
||||
depth = metadata.get('stockfish_depth', '?')
|
||||
created = metadata.get('created', '?')
|
||||
if created != '?':
|
||||
created = created.split('T')[0] # Just the date
|
||||
|
||||
table.add_row(f"v{v}", f"{positions:,}", source_str, str(depth), created)
|
||||
|
||||
console.print(table)
|
||||
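Both `create_dataset` and `extend_dataset` rely on the same FEN-keyed dedup pattern: the FEN string is the identity key for a position, and malformed JSONL lines are silently skipped. A minimal standalone sketch of that pattern (hypothetical helper and data, independent of the module's `get_datasets_dir`/`list_datasets` helpers):

```python
import json

def merge_unique(existing_lines, new_lines):
    """Keep only records whose FEN has not been seen before.

    Mirrors the dedup logic in create_dataset/extend_dataset:
    the FEN string is the identity key for a position, and
    malformed lines are dropped rather than raising.
    """
    seen = set()
    merged = []
    for line in existing_lines + new_lines:
        try:
            fen = json.loads(line).get("fen")
        except json.JSONDecodeError:
            continue  # skip malformed lines, as the module does
        if fen and fen not in seen:
            seen.add(fen)
            merged.append(line)
    return merged

existing = ['{"fen": "A", "eval": 0.1}']
new = ['{"fen": "A", "eval": 0.2}', '{"fen": "B", "eval": 0.3}', 'not json']
merged = merge_unique(existing, new)  # duplicate "A" and the bad line are dropped
```

Note that the first occurrence of a FEN wins, so re-running an import never overwrites an existing evaluation.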
@@ -0,0 +1,137 @@
#!/usr/bin/env python3
"""Export NNUE weights to .nbai format for runtime loading."""

import json
import struct
import sys
from datetime import datetime
from pathlib import Path

import torch

MAGIC = 0x4942_414E  # file magic; packs little-endian to the bytes b'NABI'
VERSION = 1


def _read_sidecar(weights_file: str) -> dict:
    """Load the optional *_metadata.json sidecar saved next to the weights."""
    sidecar = weights_file.replace(".pt", "_metadata.json")
    if Path(sidecar).exists():
        with open(sidecar) as f:
            return json.load(f)
    return {}


def _infer_layers(state_dict: dict) -> list[dict]:
    """Derive layer descriptors from state_dict weight shapes.

    Assumes layers named l1, l2, ..., lN.
    All hidden layers get activation 'relu'; the last gets 'linear'.
    """
    names = sorted(
        {k.split(".")[0] for k in state_dict if k.endswith(".weight")},
        key=lambda n: int(n[1:]),
    )
    layers = []
    for i, name in enumerate(names):
        out_size, in_size = state_dict[f"{name}.weight"].shape
        activation = "linear" if i == len(names) - 1 else "relu"
        layers.append({"activation": activation, "inputSize": int(in_size), "outputSize": int(out_size)})
    return layers


def _write_floats(f, tensor):
    """Write a tensor as a length-prefixed array of little-endian float32s."""
    data = tensor.float().flatten().cpu().numpy()
    f.write(struct.pack("<I", len(data)))
    f.write(struct.pack(f"<{len(data)}f", *data))


def export_to_nbai(
    weights_file: str,
    output_file: str,
    trained_by: str = "unknown",
    train_loss: float = 0.0,
):
    if not Path(weights_file).exists():
        print(f"Error: weights file not found at {weights_file}")
        sys.exit(1)

    loaded = torch.load(weights_file, map_location="cpu")
    state_dict = (
        loaded["model_state_dict"]
        if isinstance(loaded, dict) and "model_state_dict" in loaded
        else loaded
    )

    sidecar = _read_sidecar(weights_file)
    val_loss = (
        float(loaded.get("best_val_loss", sidecar.get("final_val_loss", 0.0)))
        if isinstance(loaded, dict)
        else 0.0
    )
    trained_at = sidecar.get("date", datetime.now().isoformat())
    training_data_count = int(sidecar.get("num_positions", 0))

    metadata = {
        "trainedBy": trained_by,
        "trainedAt": trained_at,
        "trainingDataCount": training_data_count,
        "valLoss": val_loss,
        "trainLoss": train_loss,
    }

    layers = _infer_layers(state_dict)
    layer_names = sorted(
        {k.split(".")[0] for k in state_dict if k.endswith(".weight")},
        key=lambda n: int(n[1:]),
    )

    print(f"Architecture ({len(layers)} layers):")
    for i, l in enumerate(layers):
        print(f"  l{i + 1}: {l['inputSize']} -> {l['outputSize']} [{l['activation']}]")

    Path(output_file).parent.mkdir(parents=True, exist_ok=True)

    with open(output_file, "wb") as f:
        # Header
        f.write(struct.pack("<I", MAGIC))
        f.write(struct.pack("<H", VERSION))

        # Metadata (length-prefixed UTF-8 JSON)
        meta_bytes = json.dumps(metadata, indent=2).encode("utf-8")
        f.write(struct.pack("<I", len(meta_bytes)))
        f.write(meta_bytes)

        # Layer descriptors
        f.write(struct.pack("<H", len(layers)))
        for layer in layers:
            name_bytes = layer["activation"].encode("ascii")
            f.write(struct.pack("<B", len(name_bytes)))
            f.write(name_bytes)
            f.write(struct.pack("<I", layer["inputSize"]))
            f.write(struct.pack("<I", layer["outputSize"]))

        # Weights: weight tensor then bias tensor per layer
        for name in layer_names:
            w = state_dict[f"{name}.weight"]
            b = state_dict[f"{name}.bias"]
            _write_floats(f, w)
            _write_floats(f, b)
            print(f"  Wrote {name}: weight {tuple(w.shape)}, bias {tuple(b.shape)}")

    size_mb = Path(output_file).stat().st_size / (1024 ** 2)
    print(f"\nExported to {output_file} ({size_mb:.2f} MB)")
    print(f"Metadata: {json.dumps(metadata, indent=2)}")


if __name__ == "__main__":
    weights_file = "nnue_weights.pt"
    output_file = "../src/main/resources/nnue_weights.nbai"
    trained_by = "unknown"
    train_loss = 0.0

    if len(sys.argv) > 1:
        weights_file = sys.argv[1]
    if len(sys.argv) > 2:
        output_file = sys.argv[2]
    if len(sys.argv) > 3:
        trained_by = sys.argv[3]
    if len(sys.argv) > 4:
        train_loss = float(sys.argv[4])

    export_to_nbai(weights_file, output_file, trained_by, train_loss)
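The exporter defines a small binary layout: magic (uint32), version (uint16), length-prefixed JSON metadata, a layer-descriptor table, then length-prefixed float32 arrays. A hedged reader sketch (not the project's actual runtime loader, which is not shown here) that mirrors the header layout written above:

```python
import io
import json
import struct

def read_nbai_header(buf):
    """Parse the .nbai header fields, mirroring the writer's struct calls.

    All integers are little-endian, matching the '<' format strings
    used in the exporter.
    """
    magic, = struct.unpack("<I", buf.read(4))
    version, = struct.unpack("<H", buf.read(2))
    meta_len, = struct.unpack("<I", buf.read(4))
    metadata = json.loads(buf.read(meta_len).decode("utf-8"))
    n_layers, = struct.unpack("<H", buf.read(2))
    layers = []
    for _ in range(n_layers):
        name_len, = struct.unpack("<B", buf.read(1))
        activation = buf.read(name_len).decode("ascii")
        in_size, out_size = struct.unpack("<II", buf.read(8))
        layers.append({"activation": activation, "inputSize": in_size, "outputSize": out_size})
    return magic, version, metadata, layers

# Round-trip a tiny handwritten header to check the layout.
raw = io.BytesIO()
raw.write(struct.pack("<I", 0x4942_414E))          # magic
raw.write(struct.pack("<H", 1))                    # version
meta = json.dumps({"trainedBy": "test"}).encode("utf-8")
raw.write(struct.pack("<I", len(meta)))
raw.write(meta)
raw.write(struct.pack("<H", 1))                    # one layer descriptor
raw.write(struct.pack("<B", 4) + b"relu" + struct.pack("<II", 768, 256))
raw.seek(0)
magic, version, metadata, layers = read_nbai_header(raw)
```

After the header, a real reader would consume one `<I` count followed by that many `<f` floats per tensor, in the same weight-then-bias order the writer uses.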
@@ -0,0 +1,171 @@
#!/usr/bin/env python3
"""Generate random chess positions for NNUE training with multiprocessing."""

import random
import sys
from datetime import datetime
from multiprocessing import Pool

import chess
from tqdm import tqdm


def _worker_generate_games(worker_id, games_per_worker, samples_per_game, min_move, max_move):
    """Generate games for one worker.

    Returns:
        List of FENs generated by this worker.
    """
    positions = []

    for _ in range(games_per_worker):
        board = chess.Board()
        move_history = []

        # Play a complete random game
        while not board.is_game_over() and len(move_history) < 200:
            legal_moves = list(board.legal_moves)
            if not legal_moves:
                break
            move = random.choice(legal_moves)
            board.push(move)
            move_history.append(board.copy())

        # Determine the range of moves to sample from
        game_length = len(move_history)
        valid_start = max(min_move, 0)
        valid_end = min(max_move, game_length)

        if valid_start >= valid_end:
            continue

        # Randomly sample positions from this game
        sample_count = min(samples_per_game, valid_end - valid_start)
        if sample_count > 0:
            sample_indices = random.sample(
                range(valid_start, valid_end),
                k=sample_count
            )

            for idx in sample_indices:
                sampled_board = move_history[idx]

                # Only filter truly invalid or terminal positions
                if not sampled_board.is_valid() or sampled_board.is_game_over():
                    continue

                # Save the position (checks and captures are kept deliberately)
                positions.append(sampled_board.fen())

    return positions


def play_random_game_and_collect_positions(
    output_file,
    total_positions=3000000,
    samples_per_game=1,
    min_move=1,
    max_move=50,
    num_workers=8
):
    """Generate positions using multiple worker processes.

    Args:
        output_file: Output file for positions
        total_positions: Target number of positions to generate
        samples_per_game: Number of positions to sample per game
        min_move: Minimum move number to start sampling from
        max_move: Maximum move number for sampling
        num_workers: Number of parallel worker processes

    Returns:
        Number of valid positions saved.
    """
    # Estimate games needed (samples_per_game positions per game on average)
    total_games = max(total_positions // samples_per_game, num_workers)
    games_per_worker = total_games // num_workers

    print(f"Generating {total_positions:,} positions using {num_workers} workers")
    print(f"Total games: ~{total_games:,} ({games_per_worker:,} per worker)")
    print()

    start_time = datetime.now()

    # Generate positions in parallel
    worker_tasks = [
        (i, games_per_worker, samples_per_game, min_move, max_move)
        for i in range(num_workers)
    ]

    positions_count = 0
    all_positions = []

    with Pool(num_workers) as pool:
        with tqdm(total=num_workers, desc="Workers generating games") as pbar:
            for positions in pool.starmap(_worker_generate_games, worker_tasks):
                all_positions.extend(positions)
                positions_count += len(positions)
                pbar.update(1)

    # Write all positions to file
    print(f"Writing {positions_count:,} positions to {output_file}...")
    with open(output_file, 'w') as f:
        for fen in all_positions:
            f.write(fen + '\n')

    elapsed_time = datetime.now() - start_time
    elapsed_seconds = elapsed_time.total_seconds()
    positions_per_second = positions_count / elapsed_seconds if elapsed_seconds > 0 else 0

    # Print summary
    print()
    print("=" * 60)
    print("POSITION GENERATION SUMMARY")
    print("=" * 60)
    print(f"Target positions: {total_positions:,}")
    print(f"Actual positions saved: {positions_count:,}")
    print(f"Workers: {num_workers}")
    print(f"Games per worker: {games_per_worker:,}")
    print(f"Samples per game: {samples_per_game}")
    print(f"Move range: {min_move}-{max_move}")
    print(f"Elapsed time: {elapsed_time}")
    print(f"Throughput: {positions_per_second:.0f} positions/second")
    print("=" * 60)
    print()

    if positions_count == 0:
        print("WARNING: No valid positions were generated!")
        return 0

    return positions_count


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(description="Generate random chess positions for NNUE training")
    parser.add_argument("output_file", nargs="?", default="positions.txt",
                        help="Output file for positions (default: positions.txt)")
    parser.add_argument("--positions", type=int, default=3000000,
                        help="Target number of positions to generate (default: 3000000)")
    parser.add_argument("--samples-per-game", type=int, default=1,
                        help="Number of positions to sample per game (default: 1)")
    parser.add_argument("--min-move", type=int, default=1,
                        help="Minimum move number to sample from (default: 1)")
    parser.add_argument("--max-move", type=int, default=50,
                        help="Maximum move number to sample from (default: 50)")
    parser.add_argument("--workers", type=int, default=8,
                        help="Number of parallel worker processes (default: 8)")

    args = parser.parse_args()

    count = play_random_game_and_collect_positions(
        output_file=args.output_file,
        total_positions=args.positions,
        samples_per_game=args.samples_per_game,
        min_move=args.min_move,
        max_move=args.max_move,
        num_workers=args.workers
    )

    sys.exit(0 if count > 0 else 1)
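The worker clamps the requested `[min_move, max_move]` window to the actual game length before sampling, so short games yield fewer (or zero) positions rather than out-of-range indices. A quick standalone check of that clamping (a hypothetical helper replicating the worker's window logic, not a function from the script):

```python
import random

def sample_window(game_length, min_move, max_move, samples_per_game):
    """Replicates the worker's index-window logic in isolation."""
    valid_start = max(min_move, 0)
    valid_end = min(max_move, game_length)
    if valid_start >= valid_end:
        return []  # game too short for the requested window
    k = min(samples_per_game, valid_end - valid_start)
    return sorted(random.sample(range(valid_start, valid_end), k=k))

# A 30-ply game with window 1..50 can only be sampled up to ply 30.
idxs = sample_window(30, 1, 50, 3)
```

The clamp also explains why very large `--min-move` values silently reduce throughput: every game shorter than `min_move` contributes nothing.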
@@ -0,0 +1,326 @@
#!/usr/bin/env python3
"""Label positions with Stockfish evaluations and analyze distribution."""

import json
import os
import sys
import time
from multiprocessing import Pool
from pathlib import Path

import chess.engine
import numpy as np
from tqdm import tqdm


def normalize_evaluation(cp_value, method='tanh', scale=300.0):
    """Normalize a centipawn evaluation to a bounded range.

    Args:
        cp_value: Centipawn evaluation from Stockfish
        method: 'tanh' (default) or 'sigmoid'
        scale: Scale factor (300 is typical for tanh)

    Returns:
        Normalized value in approximately [-1, 1] (tanh) or [0, 1] (sigmoid)
    """
    if method == 'tanh':
        return np.tanh(cp_value / scale)
    elif method == 'sigmoid':
        return 1.0 / (1.0 + np.exp(-cp_value / scale))
    else:
        return cp_value / 100.0


def _evaluate_fen_batch(args):
    """Worker function: evaluate a batch of FENs with one Stockfish process.

    Args:
        args: tuple of (fens, stockfish_path, depth, normalize)

    Returns:
        Tuple of (batch_size, results), where results is a list of
        (fen, eval_normalized, eval_raw) tuples. Returning the batch size
        lets the caller account for positions that failed to evaluate.
    """
    fens, stockfish_path, depth, normalize = args

    results = []

    try:
        engine = chess.engine.SimpleEngine.popen_uci(stockfish_path)
    except Exception:
        return len(fens), []

    try:
        for fen in fens:
            try:
                board = chess.Board(fen)
                if not board.is_valid():
                    continue

                info = engine.analyse(board, chess.engine.Limit(depth=depth))

                if info.get('score') is None:
                    continue

                score = info['score'].white()

                if score.is_mate():
                    eval_cp = 2000 if score.mate() > 0 else -2000
                else:
                    eval_cp = score.cp

                eval_cp = max(-2000, min(2000, eval_cp))
                eval_normalized = normalize_evaluation(eval_cp) if normalize else eval_cp

                results.append((fen, eval_normalized, eval_cp))

            except Exception:
                continue
    finally:
        engine.quit()

    return len(fens), results


def label_positions_with_stockfish(
    positions_file,
    output_file,
    stockfish_path,
    batch_size=1000,
    depth=12,
    verbose=False,
    normalize=True,
    num_workers=1,
):
    """Read positions and label them with Stockfish evaluations.

    Args:
        positions_file: Path to positions.txt
        output_file: Path to training_data.jsonl
        stockfish_path: Path to the stockfish binary
        batch_size: Positions per worker task (default: 1000)
        depth: Stockfish search depth
        verbose: Print detailed error messages
        normalize: If True, normalize evals using tanh
        num_workers: Number of parallel Stockfish processes
    """
    # Check that Stockfish exists
    if not Path(stockfish_path).exists():
        print(f"Error: Stockfish not found at {stockfish_path}")
        print("Set the STOCKFISH_PATH environment variable or pass the path as an argument")
        sys.exit(1)

    print(f"Using Stockfish: {stockfish_path}")
    print(f"Number of workers: {num_workers}")

    # Check that the positions file exists
    if not Path(positions_file).exists():
        print(f"Error: Positions file not found at {positions_file}")
        sys.exit(1)

    # Load existing evaluations if resuming
    evaluated_fens = set()
    position_count = 0

    if Path(output_file).exists():
        with open(output_file, 'r') as f:
            for line in f:
                try:
                    data = json.loads(line)
                    evaluated_fens.add(data['fen'])
                    position_count += 1
                except json.JSONDecodeError:
                    pass
        print(f"Resuming from {position_count} already evaluated positions")

    # Load all FENs that need evaluation
    fens_to_evaluate = []
    fens_seen_in_input = set()  # Track duplicates within the input file
    skipped_invalid = 0
    skipped_duplicate = 0

    with open(positions_file, 'r') as f:
        for fen in f:
            fen = fen.strip()

            if not fen:
                skipped_invalid += 1
                continue

            if fen in evaluated_fens:
                skipped_duplicate += 1
                continue

            if fen in fens_seen_in_input:
                skipped_duplicate += 1
                continue

            fens_to_evaluate.append(fen)
            fens_seen_in_input.add(fen)

    total_to_evaluate = len(fens_to_evaluate)
    total_lines = position_count + skipped_duplicate + skipped_invalid + total_to_evaluate

    if total_to_evaluate == 0:
        if position_count == 0:
            print(f"Error: No valid positions to evaluate in {positions_file}")
            sys.exit(1)
        else:
            print("All positions already evaluated. No new positions to process.")
            return True

    print(f"Total positions to process: {total_lines}")
    print(f"New positions to evaluate: {total_to_evaluate}")
    print(f"Using depth: {depth}")
    print()

    # Split FENs into batches for the workers
    batches = []
    for i in range(0, total_to_evaluate, batch_size):
        batch = fens_to_evaluate[i:i + batch_size]
        batches.append((batch, stockfish_path, depth, normalize))

    # Process batches in parallel
    evaluated = 0
    errors = 0
    raw_evals = []
    normalized_evals = []

    start_time = time.time()

    with Pool(num_workers) as pool:
        with tqdm(total=total_lines, initial=position_count, desc="Labeling positions") as pbar:
            with open(output_file, 'a') as out:
                for batch_idx, (batch_size_actual, batch_results) in enumerate(
                    pool.imap_unordered(_evaluate_fen_batch, batches)
                ):
                    for fen, eval_normalized, eval_cp in batch_results:
                        # Skip anything already written during this run
                        if fen in evaluated_fens:
                            continue

                        data = {"fen": fen, "eval": eval_normalized, "eval_raw": eval_cp}
                        out.write(json.dumps(data) + '\n')
                        evaluated_fens.add(fen)
                        evaluated += 1
                        raw_evals.append(eval_cp)
                        normalized_evals.append(eval_normalized)
                        pbar.update(1)

                    # Account for positions this batch failed to evaluate
                    # (the worker returns its own input size, since batches
                    # arrive out of order and the last one may be smaller)
                    failed = batch_size_actual - len(batch_results)
                    if failed > 0:
                        errors += failed
                        pbar.update(failed)

                    # Calculate throughput and ETA
                    elapsed = time.time() - start_time
                    throughput = evaluated / elapsed if elapsed > 0 else 0
                    remaining_positions = total_to_evaluate - evaluated
                    eta_seconds = remaining_positions / throughput if throughput > 0 else 0
                    eta_str = f"{int(eta_seconds // 60)}:{int(eta_seconds % 60):02d}"

                    if (batch_idx + 1) % max(1, len(batches) // 10) == 0:
                        pbar.set_postfix({
                            'rate': f'{throughput:.0f} pos/s',
                            'eta': eta_str
                        })

    # Print summary
    print()
    print("=" * 60)
    print("LABELING SUMMARY")
    print("=" * 60)
    print(f"Successfully evaluated: {evaluated}")
    print(f"Skipped (duplicates): {skipped_duplicate}")
    print(f"Skipped (invalid): {skipped_invalid}")
    print(f"Errors: {errors}")
    print(f"Total processed: {evaluated + skipped_duplicate + skipped_invalid + errors}")
    print("=" * 60)
    print()

    if evaluated == 0:
        print("WARNING: No positions were successfully evaluated!")
        print("Check that:")
        print("  1. positions.txt is not empty")
        print("  2. positions.txt contains valid FENs")
        print("  3. Stockfish is installed and working")
        print("  4. The Stockfish path is correct")
        return False

    # Print distribution analysis
    if raw_evals:
        raw_evals_arr = np.array(raw_evals)
        norm_evals_arr = np.array(normalized_evals)

        print("=" * 60)
        print("EVALUATION DISTRIBUTION ANALYSIS")
        print("=" * 60)
        print()
        print("Raw Evaluations (centipawns):")
        print(f"  Min:    {raw_evals_arr.min():.1f}")
        print(f"  Max:    {raw_evals_arr.max():.1f}")
        print(f"  Mean:   {raw_evals_arr.mean():.1f}")
        print(f"  Median: {np.median(raw_evals_arr):.1f}")
        print(f"  Std:    {raw_evals_arr.std():.1f}")
        print()

        print("Normalized Evaluations (tanh):")
        print(f"  Min:    {norm_evals_arr.min():.4f}")
        print(f"  Max:    {norm_evals_arr.max():.4f}")
        print(f"  Mean:   {norm_evals_arr.mean():.4f}")
        print(f"  Median: {np.median(norm_evals_arr):.4f}")
        print(f"  Std:    {norm_evals_arr.std():.4f}")
        print()

        # Distribution buckets (labels are in pawn units, i.e. cp / 100)
        print("Raw Evaluation Buckets (counts):")
        buckets = [
            (-float('inf'), -500, "< -5.00"),
            (-500, -300, "[-5.00, -3.00)"),
            (-300, -100, "[-3.00, -1.00)"),
            (-100, 0, "[-1.00, 0.00)"),
            (0, 100, "[0.00, 1.00)"),
            (100, 300, "[1.00, 3.00)"),
            (300, 500, "[3.00, 5.00)"),
            (500, float('inf'), "> 5.00"),
        ]
        for low, high, label in buckets:
            count = np.sum((raw_evals_arr > low) & (raw_evals_arr <= high))
            pct = 100.0 * count / len(raw_evals_arr)
            print(f"  {label}: {count:6d} ({pct:5.1f}%)")

        print("=" * 60)
        print()

    print(f"✓ Labeling complete. Output saved to {output_file}")
    return True


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(description="Label chess positions with Stockfish evaluations")
    parser.add_argument("positions_file", nargs="?", default="positions.txt",
                        help="Input positions file (default: positions.txt)")
    parser.add_argument("output_file", nargs="?", default="training_data.jsonl",
                        help="Output file (default: training_data.jsonl)")
    parser.add_argument("stockfish_path", nargs="?", default=None,
                        help="Path to Stockfish binary (default: $STOCKFISH_PATH or 'stockfish')")
    parser.add_argument("--depth", type=int, default=12,
                        help="Stockfish depth (default: 12)")
    parser.add_argument("--batch-size", type=int, default=1000,
                        help="Batch size for processing (default: 1000)")
    parser.add_argument("--no-normalize", action="store_true",
                        help="Disable evaluation normalization (keep raw centipawns)")
    parser.add_argument("--verbose", action="store_true",
                        help="Print detailed error messages")
    parser.add_argument("--workers", type=int, default=1,
                        help="Number of parallel Stockfish processes (default: 1)")

    args = parser.parse_args()

    # Determine the Stockfish path
    stockfish_path = args.stockfish_path or os.environ.get("STOCKFISH_PATH", "stockfish")

    success = label_positions_with_stockfish(
        positions_file=args.positions_file,
        output_file=args.output_file,
        stockfish_path=stockfish_path,
        batch_size=args.batch_size,
        depth=args.depth,
        normalize=not args.no_normalize,
        verbose=args.verbose,
        num_workers=args.workers
    )

    sys.exit(0 if success else 1)
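The tanh normalization used throughout the pipeline compresses centipawn scores smoothly into (-1, 1): a level position maps to 0, scores near the scale factor land in the steep part of the curve, and the +/-2000 cp mate clamp saturates near +/-1. A quick numeric check of the mapping at the default scale of 300 (pure-stdlib sketch of the same formula, without NumPy):

```python
import math

def normalize(cp, scale=300.0):
    """Same mapping as normalize_evaluation(method='tanh')."""
    return math.tanh(cp / scale)

# tanh is odd and monotonic, so sign and ordering of the raw
# evaluation are preserved; only the magnitude is compressed.
samples = {cp: normalize(cp) for cp in (0, 100, 300, 2000, -300)}
```

This is why `eval_raw` is also stored in each JSONL record: the normalized value alone cannot distinguish, say, +12 pawns from a forced mate once tanh has saturated.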
@@ -0,0 +1,208 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Import pre-labeled positions from the Lichess evaluation database.
|
||||
|
||||
Source: https://database.lichess.org/#evals
|
||||
Format: lichess_db_eval.jsonl.zst — compressed JSONL, one position per line.
|
||||
|
||||
Each line:
|
||||
{
|
||||
"fen": "<pieces> <turn> <castling> <ep>",
|
||||
"evals": [
|
||||
{
|
||||
"knodes": <int>,
|
||||
"depth": <int>,
|
||||
"pvs": [{"cp": <int>, "line": "..."} | {"mate": <int>, "line": "..."}]
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
cp and mate are from White's perspective (positive = White winning), matching
|
||||
the sign convention used by label.py (score.white()) and expected by train.py.
|
||||
"""
|
||||
|
||||
import json
|
||||
import sys
|
||||
import numpy as np
|
||||
from pathlib import Path
|
||||
from tqdm import tqdm
|
||||
|
||||
MATE_CP = 20000
|
||||
SCALE = 300.0
|
||||
|
||||
|
||||
def _best_eval(evals: list) -> dict | None:
|
||||
"""Return the highest-depth evaluation entry, using knodes as tiebreaker."""
|
||||
if not evals:
|
||||
return None
|
||||
return max(evals, key=lambda e: (e.get("depth", 0), e.get("knodes", 0)))
|
||||
|
||||
|
||||
def _cp_from_pv(pv: dict) -> int | None:
|
||||
"""Extract centipawn value from a principal variation entry."""
|
||||
if "cp" in pv:
|
||||
return max(-MATE_CP, min(MATE_CP, pv["cp"]))
|
||||
if "mate" in pv:
|
||||
return MATE_CP if pv["mate"] > 0 else -MATE_CP
|
||||
return None
|
||||
|
||||
|
||||
def _normalize(cp: int) -> float:
|
||||
return float(np.tanh(cp / SCALE))
|
||||
|
||||
|
||||
def import_lichess_evals(
|
||||
input_path: str,
|
||||
output_file: str,
|
||||
max_positions: int | None = None,
|
||||
min_depth: int = 0,
|
||||
verbose: bool = False,
|
||||
) -> int:
|
||||
"""Stream the Lichess eval database and write a labeled.jsonl file.
|
||||
|
||||
Args:
|
||||
input_path: Path to lichess_db_eval.jsonl.zst (or uncompressed .jsonl).
|
||||
output_file: Destination labeled.jsonl (appended — supports resuming).
|
||||
max_positions: Stop after this many new positions (None = no limit).
|
||||
min_depth: Skip positions whose best eval has depth < min_depth.
|
||||
verbose: Print warnings for skipped lines.
|
||||
|
||||
Returns:
|
||||
Number of new positions written.
|
||||
"""
|
||||
import zstandard as zstd
|
||||
|
||||
input_path = Path(input_path)
|
||||
if not input_path.exists():
|
||||
print(f"Error: {input_path} not found")
|
||||
sys.exit(1)
|
||||
|
||||
# Resume: collect already-written FENs so we skip duplicates.
|
||||
seen_fens: set[str] = set()
|
||||
if Path(output_file).exists():
|
||||
with open(output_file, "r") as f:
|
||||
for line in f:
|
||||
try:
|
||||
seen_fens.add(json.loads(line)["fen"])
|
||||
except (json.JSONDecodeError, KeyError):
|
||||
pass
|
||||
if seen_fens:
|
||||
print(f"Resuming — skipping {len(seen_fens):,} already-imported positions")
|
||||
|
||||
written = 0
|
||||
skipped_depth = 0
|
||||
skipped_no_eval = 0
|
||||
skipped_dup = 0
|
||||
|
||||
def iter_lines():
|
||||
"""Yield decoded text lines from either a .zst or plain .jsonl file."""
|
||||
import io
|
||||
if input_path.suffix == ".zst":
|
||||
dctx = zstd.ZstdDecompressor()
|
||||
with open(input_path, "rb") as fh:
|
||||
with dctx.stream_reader(fh) as reader:
|
||||
text_stream = io.TextIOWrapper(reader, encoding="utf-8")
|
||||
yield from text_stream
|
||||
else:
|
||||
with open(input_path, "r", encoding="utf-8") as fh:
|
||||
yield from fh
|
||||
|
||||
    with open(output_file, "a") as out:
        with tqdm(desc="Importing Lichess evals", unit=" pos") as pbar:
            for raw_line in iter_lines():
                line = raw_line.strip()
                if not line:
                    continue

                try:
                    data = json.loads(line)
                except json.JSONDecodeError:
                    if verbose:
                        print("Warning: malformed JSON line skipped")
                    continue

                fen = data.get("fen", "")
                if not fen:
                    skipped_no_eval += 1
                    continue

                if fen in seen_fens:
                    skipped_dup += 1
                    continue

                best = _best_eval(data.get("evals", []))
                if best is None:
                    skipped_no_eval += 1
                    continue

                if best.get("depth", 0) < min_depth:
                    skipped_depth += 1
                    continue

                pvs = best.get("pvs", [])
                if not pvs:
                    skipped_no_eval += 1
                    continue

                cp = _cp_from_pv(pvs[0])
                if cp is None:
                    skipped_no_eval += 1
                    continue

                record = {
                    "fen": fen,
                    "eval": _normalize(cp),
                    "eval_raw": cp,
                }
                out.write(json.dumps(record) + "\n")
                seen_fens.add(fen)
                written += 1
                pbar.update(1)

                if max_positions and written >= max_positions:
                    print(f"\nReached max_positions limit ({max_positions:,})")
                    break

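The resume-and-dedup flow above can be sketched in a stdlib-only miniature (the in-memory JSONL records and FEN strings here are invented for illustration): re-reading previously written records and skipping their FENs is what makes the import idempotent, and malformed lines are tolerated the same way.

```python
import io
import json

# Stand-in for re-reading an existing output_file, including one bad line.
existing = io.StringIO(
    '{"fen": "A", "eval": 0.1}\n'
    '{"fen": "B", "eval": -0.2}\n'
    'not json\n'
)
seen = set()
for line in existing:
    try:
        seen.add(json.loads(line)["fen"])
    except (json.JSONDecodeError, KeyError):
        pass  # malformed or incomplete records are simply skipped

# Only FENs not already on disk get written on the next run.
incoming = ["A", "C", "B", "D"]
to_write = [f for f in incoming if f not in seen]
assert seen == {"A", "B"}
assert to_write == ["C", "D"]
```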
    print()
    print("=" * 60)
    print("LICHESS IMPORT SUMMARY")
    print("=" * 60)
    print(f"Positions written: {written:,}")
    print(f"Skipped (dup): {skipped_dup:,}")
    print(f"Skipped (no eval): {skipped_no_eval:,}")
    print(f"Skipped (depth<{min_depth}): {skipped_depth:,}")
    print("=" * 60)
    print(f"\n✓ Output: {output_file}")

    return written

if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(
        description="Import Lichess pre-labeled positions into labeled.jsonl"
    )
    parser.add_argument("input_path",
                        help="Path to lichess_db_eval.jsonl.zst")
    parser.add_argument("output_file", nargs="?", default="training_data.jsonl",
                        help="Output labeled.jsonl (default: training_data.jsonl)")
    parser.add_argument("--max-positions", type=int, default=None,
                        help="Stop after N positions (default: no limit)")
    parser.add_argument("--min-depth", type=int, default=0,
                        help="Minimum eval depth to accept (default: 0)")
    parser.add_argument("--verbose", action="store_true",
                        help="Print warnings for skipped lines")

    args = parser.parse_args()
    count = import_lichess_evals(
        input_path=args.input_path,
        output_file=args.output_file,
        max_positions=args.max_positions,
        min_depth=args.min_depth,
        verbose=args.verbose,
    )
    sys.exit(0 if count > 0 else 1)
@@ -0,0 +1,249 @@
import chess
import csv
import json
import sys
import urllib.request
from pathlib import Path
from typing import Set

try:
    import zstandard as zstd
except ImportError:
    print("zstandard library not found. Install with: pip install zstandard")
    sys.exit(1)

from generate import play_random_game_and_collect_positions


def download_and_extract_puzzle_db(
    url: str = 'https://database.lichess.org/lichess_db_puzzle.csv.zst',
    output_dir: str = 'tactical_data'
):
    """Download and extract the Lichess puzzle database."""
    output_path = Path(output_dir)
    output_path.mkdir(parents=True, exist_ok=True)

    csv_file = output_path / 'lichess_db_puzzle.csv'
    zst_file = output_path / 'lichess_db_puzzle.csv.zst'

    # Download if not already present
    if not zst_file.exists():
        print(f"Downloading puzzle database from {url}...")
        try:
            urllib.request.urlretrieve(url, zst_file)
            print(f"Downloaded to {zst_file}")
        except Exception as e:
            print(f"Failed to download: {e}")
            return None

    # Extract if the CSV doesn't exist yet
    if not csv_file.exists():
        print(f"Extracting {zst_file}...")
        try:
            with open(zst_file, 'rb') as f:
                dctx = zstd.ZstdDecompressor()
                with dctx.stream_reader(f) as reader:
                    with open(csv_file, 'wb') as out:
                        out.write(reader.read())
            print(f"Extracted to {csv_file}")
        except Exception as e:
            print(f"Failed to extract: {e}")
            return None

    return str(csv_file)


def extract_puzzle_positions(
    puzzle_csv: str,
    max_puzzles: int = 300_000
) -> Set[str]:
    """
    Extract the position where the tactic must be found from each puzzle.

    This is exactly the type of position where tactical
    recognition matters most.

    Returns a set of unique FENs.
    """
    positions = set()

    with open(puzzle_csv) as f:
        reader = csv.DictReader(f)
        for row in reader:
            if len(positions) >= max_puzzles:
                break

            try:
                board = chess.Board(row['FEN'])

                # In the Lichess puzzle DB the FEN is the position BEFORE
                # the opponent's (usually losing) move, and Moves[0] is that
                # move. Applying it yields the position where the tactic
                # exists, so the model learns to find the tactic rather
                # than just play it out.
                moves = row['Moves'].split()
                board.push_uci(moves[0])  # opponent's blunder
                fen = board.fen()

                # Filter for useful tactical themes
                themes = row.get('Themes', '')
                useful = any(t in themes for t in [
                    'fork', 'pin', 'skewer', 'discoveredAttack',
                    'mate', 'mateIn2', 'mateIn3', 'hangingPiece',
                    'trappedPiece', 'sacrifice'
                ])

                if useful:
                    positions.add(fen)

            except Exception:
                continue

    return positions


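The theme whitelist above uses plain `in` checks on the `Themes` column, which is substring matching. A dependency-free sketch of that filter (the sample theme strings are invented):

```python
# Substring-based theme filter, mirroring the whitelist used when
# scanning the puzzle CSV.
USEFUL_THEMES = ['fork', 'pin', 'skewer', 'discoveredAttack',
                 'mate', 'mateIn2', 'mateIn3', 'hangingPiece',
                 'trappedPiece', 'sacrifice']

def is_useful(themes: str) -> bool:
    # Note: `t in themes` is substring matching, so 'mate' also
    # matches 'mateIn2' and 'mateIn3'.
    return any(t in themes for t in USEFUL_THEMES)

assert is_useful("crushing fork middlegame short")
assert is_useful("endgame mateIn2")
assert not is_useful("opening advantage quiet")
```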
def load_positions_from_file(file_path: str) -> Set[str]:
    """Load positions from a text file (one FEN per line)."""
    positions = set()
    try:
        with open(file_path) as f:
            for line in f:
                line = line.strip()
                if line:
                    positions.add(line)
        print(f"Loaded {len(positions)} positions from {file_path}")
        return positions
    except Exception as e:
        print(f"Failed to load from {file_path}: {e}")
        return set()


def merge_positions(
    tactical: Set[str],
    other: Set[str],
    output_file: str = 'position.txt'
):
    """Merge two position sets and write to file."""
    merged = tactical | other

    with open(output_file, 'w') as f:
        for fen in merged:
            f.write(fen + '\n')

    overlap = len(tactical & other)
    print(f"\n{'='*60}")
    print("MERGE SUMMARY")
    print(f"{'='*60}")
    print(f"Tactical positions: {len(tactical):,}")
    print(f"Other positions: {len(other):,}")
    print(f"Overlap (deduplicated): {overlap:,}")
    print(f"Total merged positions: {len(merged):,}")
    print(f"Written to: {output_file}")
    print(f"{'='*60}\n")


def extract_tactical_only(
    puzzle_csv: str,
    output_file: str,
    max_puzzles: int = 300_000
) -> int:
    """Extract tactical positions and save to file (no merge prompts).

    Args:
        puzzle_csv: Path to Lichess puzzle CSV
        output_file: Where to save the FEN positions
        max_puzzles: Maximum puzzles to extract

    Returns:
        Number of positions extracted
    """
    print("Extracting tactical positions from puzzle database...")
    tactical_positions = extract_puzzle_positions(puzzle_csv, max_puzzles)

    with open(output_file, 'w') as f:
        for fen in tactical_positions:
            f.write(fen + '\n')

    return len(tactical_positions)


def interactive_merge_positions(
    puzzle_csv: str,
    output_file: str = 'position.txt',
    max_puzzles: int = 300_000
):
    """Interactive workflow: extract tactical positions and merge with user selection."""
    print("\n" + "="*60)
    print("TACTICAL POSITION EXTRACTOR & MERGER")
    print("="*60 + "\n")

    # Extract tactical positions
    print("Extracting tactical positions from puzzle database...")
    tactical_positions = extract_puzzle_positions(puzzle_csv, max_puzzles)
    print(f"Extracted {len(tactical_positions):,} unique tactical positions\n")

    # Ask what to merge with
    print("What would you like to merge with these tactical positions?")
    print("1. Load from a position file")
    print("2. Generate random positions")
    print("3. Skip merging (save tactical only)")

    choice = input("\nEnter choice (1-3): ").strip()

    other_positions = set()

    if choice == '1':
        file_path = input("Enter path to position file: ").strip()
        other_positions = load_positions_from_file(file_path)

    elif choice == '2':
        positions_to_gen = input("How many positions to generate? (default 1000000): ").strip()
        try:
            positions_to_gen = int(positions_to_gen) if positions_to_gen else 1000000
        except ValueError:
            positions_to_gen = 1000000

        temp_file = 'temp_generated_positions.txt'
        print(f"\nGenerating {positions_to_gen:,} random positions...")
        play_random_game_and_collect_positions(
            output_file=temp_file,
            total_positions=positions_to_gen,
            samples_per_game=1,
            min_move=1,
            max_move=50,
            num_workers=8
        )
        other_positions = load_positions_from_file(temp_file)

    elif choice == '3':
        print("Skipping merge, saving tactical positions only...")

    else:
        print("Invalid choice, saving tactical positions only...")

    merge_positions(tactical_positions, other_positions, output_file)


if __name__ == '__main__':
    import argparse

    parser = argparse.ArgumentParser(description="Extract and merge tactical positions")
    parser.add_argument("--url", default='https://database.lichess.org/lichess_db_puzzle.csv.zst',
                        help="URL to download puzzle database from")
    parser.add_argument("--output-dir", default='trainingdata',
                        help="Directory to extract puzzle database to")
    parser.add_argument("--max-puzzles", type=int, default=300_000,
                        help="Maximum puzzles to extract (default: 300000)")
    parser.add_argument("--output-file", default='position.txt',
                        help="Output file for merged positions (default: position.txt)")

    args = parser.parse_args()

    # Download and extract
    csv_path = download_and_extract_puzzle_db(args.url, args.output_dir)

    if csv_path:
        # Interactive merge
        interactive_merge_positions(csv_path, args.output_file, args.max_puzzles)
    else:
        print("Failed to download/extract puzzle database")
        sys.exit(1)
@@ -0,0 +1,676 @@
#!/usr/bin/env python3
"""Train NNUE neural network for chess evaluation."""

import json
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, Dataset
import sys
from pathlib import Path
from tqdm import tqdm
import chess
from datetime import datetime, timedelta
import re
import numpy as np


class NNUEDataset(Dataset):
    """Dataset of chess positions with evaluations."""

    def __init__(self, data_file):
        self.positions = []
        self.evals = []
        self.evals_raw = []
        self.is_normalized = None

        with open(data_file, 'r') as f:
            for line in f:
                try:
                    data = json.loads(line)
                    fen = data['fen']
                    eval_val = data['eval']
                    self.positions.append(fen)
                    self.evals.append(eval_val)

                    # Check if normalized or raw
                    if self.is_normalized is None:
                        # If eval is in range [-1, 1], assume normalized
                        self.is_normalized = abs(eval_val) <= 1.0

                    # Store raw if available
                    if 'eval_raw' in data:
                        self.evals_raw.append(data['eval_raw'])
                    else:
                        self.evals_raw.append(eval_val)
                except (json.JSONDecodeError, KeyError):
                    pass

    def __len__(self):
        return len(self.positions)

    def __getitem__(self, idx):
        fen = self.positions[idx]
        eval_val = self.evals[idx]
        features = fen_to_features(fen)

        # Use evaluation as-is if normalized, otherwise apply sigmoid scaling
        if self.is_normalized:
            target = torch.tensor(eval_val, dtype=torch.float32)
        else:
            target = torch.sigmoid(torch.tensor(eval_val / 400.0, dtype=torch.float32))

        return features, target

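`__getitem__` squashes raw centipawn labels with `sigmoid(cp / 400)` so targets land in (0, 1). The same curve can be checked without torch (the helper name `cp_to_target` is ours, not part of the repo; 400 is the scale used above):

```python
import math

def cp_to_target(cp: float) -> float:
    # sigmoid(cp / 400): 0 cp maps to 0.5, large advantages
    # saturate toward 0 or 1
    return 1.0 / (1.0 + math.exp(-cp / 400.0))

assert cp_to_target(0) == 0.5               # equal position
assert 0.7 < cp_to_target(400) < 0.75       # sigmoid(1) ≈ 0.731
assert cp_to_target(-800) < 0.2             # sigmoid(-2) ≈ 0.119
```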
def fen_to_features(fen):
    """Convert FEN to 768-dimensional binary feature vector."""
    # Piece type to index: pawn=0, knight=1, bishop=2, rook=3, queen=4, king=5
    piece_to_idx = {'p': 0, 'n': 1, 'b': 2, 'r': 3, 'q': 4, 'k': 5,
                    'P': 6, 'N': 7, 'B': 8, 'R': 9, 'Q': 10, 'K': 11}

    features = torch.zeros(768, dtype=torch.float32)

    try:
        board = chess.Board(fen)

        # 12 piece types × 64 squares = 768
        for square in chess.SQUARES:
            piece = board.piece_at(square)
            if piece is not None:
                piece_char = piece.symbol()
                if piece_char in piece_to_idx:
                    piece_idx = piece_to_idx[piece_char]
                    feature_idx = piece_idx * 64 + square
                    features[feature_idx] = 1.0
    except ValueError:
        # Invalid FEN: fall through and return the all-zero vector
        pass

    return features


DEFAULT_HIDDEN_SIZES = [1536, 1024, 512, 256]

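`fen_to_features` flattens 12 piece planes over 64 squares into one 768-vector with `index = piece_idx * 64 + square`. A torch-free sketch of the same layout (the standalone FEN parser here is illustrative, not part of the repo; square numbering follows python-chess, a1 = 0 ... h8 = 63):

```python
# Same plane layout as fen_to_features, built from the FEN board field alone.
PIECE_TO_IDX = {'p': 0, 'n': 1, 'b': 2, 'r': 3, 'q': 4, 'k': 5,
                'P': 6, 'N': 7, 'B': 8, 'R': 9, 'Q': 10, 'K': 11}

def fen_board_to_features(fen):
    features = [0.0] * 768
    board_field = fen.split()[0]          # ranks 8 down to 1, '/'-separated
    for rank_idx, rank in enumerate(board_field.split('/')):
        file_idx = 0
        for ch in rank:
            if ch.isdigit():
                file_idx += int(ch)       # run of empty squares
            else:
                square = (7 - rank_idx) * 8 + file_idx  # a1 = 0, h8 = 63
                features[PIECE_TO_IDX[ch] * 64 + square] = 1.0
                file_idx += 1
    return features

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
feats = fen_board_to_features(start)
# 32 pieces -> exactly 32 active features; the white king on e1
# (square 4) lights up index 11 * 64 + 4 = 708.
assert sum(feats) == 32.0
assert feats[708] == 1.0
```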
class NNUE(nn.Module):
    """NNUE neural network with configurable hidden layers.

    Architecture: 768 → hidden_sizes[0] → ... → hidden_sizes[-1] → 1
    Layer attributes follow the naming l1, l2, ..., lN so export.py can
    infer the architecture directly from the state_dict.
    """

    def __init__(self, hidden_sizes=None, dropout_rate=0.2):
        super().__init__()
        if hidden_sizes is None:
            hidden_sizes = DEFAULT_HIDDEN_SIZES
        self.hidden_sizes = list(hidden_sizes)
        sizes = [768] + self.hidden_sizes + [1]
        num_hidden = len(self.hidden_sizes)

        for i in range(num_hidden):
            setattr(self, f"l{i + 1}", nn.Linear(sizes[i], sizes[i + 1]))
            setattr(self, f"relu{i + 1}", nn.ReLU())
            setattr(self, f"drop{i + 1}", nn.Dropout(dropout_rate))
        setattr(self, f"l{num_hidden + 1}", nn.Linear(sizes[-2], sizes[-1]))
        self._num_hidden = num_hidden

    def forward(self, x):
        for i in range(1, self._num_hidden + 1):
            layer = getattr(self, f"l{i}")
            relu = getattr(self, f"relu{i}")
            drop = getattr(self, f"drop{i}")
            x = drop(relu(layer(x)))
        return getattr(self, f"l{self._num_hidden + 1}")(x)

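The constructor chains sizes as `[768] + hidden_sizes + [1]`, so each `l{i}` maps `sizes[i]` to `sizes[i+1]`. A quick check of that chaining for the default architecture (the helper name `layer_shapes` is ours):

```python
# Enumerate the (in, out) shape of every Linear the NNUE constructor creates.
def layer_shapes(hidden_sizes):
    sizes = [768] + list(hidden_sizes) + [1]
    return [(sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1)]

assert layer_shapes([1536, 1024, 512, 256]) == [
    (768, 1536), (1536, 1024), (1024, 512), (512, 256), (256, 1)]
assert layer_shapes([64]) == [(768, 64), (64, 1)]  # smallest useful net
```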
def find_next_version(base_name="nnue_weights"):
    """Find the next version number for model versioning.

    Looks for nnue_weights_v*.pt files and returns the next version number.
    If no versioned files exist, returns 1.
    """
    base_path = Path(base_name)
    directory = base_path.parent
    filename = base_path.name

    pattern = re.compile(rf"{re.escape(filename)}_v(\d+)\.pt")
    versions = []

    for file in directory.glob(f"{filename}_v*.pt"):
        match = pattern.match(file.name)
        if match:
            versions.append(int(match.group(1)))

    if versions:
        return max(versions) + 1
    return 1

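The version-scan pattern above can be exercised in isolation against a throwaway directory (`next_version` is a hypothetical stand-in using the same regex-plus-glob idea, not the repo's function):

```python
import re
import tempfile
from pathlib import Path

def next_version(directory: Path, base: str = "nnue_weights") -> int:
    # Match <base>_v<N>.pt and return max(N) + 1, or 1 when none exist.
    pattern = re.compile(rf"{re.escape(base)}_v(\d+)\.pt$")
    versions = [int(m.group(1))
                for f in directory.glob(f"{base}_v*.pt")
                if (m := pattern.match(f.name))]
    return max(versions, default=0) + 1

with tempfile.TemporaryDirectory() as tmp:
    d = Path(tmp)
    assert next_version(d) == 1          # no versioned files yet
    (d / "nnue_weights_v1.pt").touch()
    (d / "nnue_weights_v3.pt").touch()
    assert next_version(d) == 4          # continues after the highest version
```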
def save_metadata(weights_file, metadata):
    """Save training metadata alongside the weights file.

    Args:
        weights_file: Path to the .pt file (e.g., nnue_weights_v1.pt)
        metadata: Dictionary with training info
    """
    metadata_file = weights_file.replace(".pt", "_metadata.json")

    with open(metadata_file, "w") as f:
        json.dump(metadata, f, indent=2, default=str)

    return metadata_file

def _setup_training(data_file, batch_size, subsample_ratio):
    """Set up device, dataset, and data loaders.

    Returns:
        (device, dataset, train_dataset, val_dataset, train_loader, val_loader, num_positions)
    """
    print("Checking GPU availability...")
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    if torch.cuda.is_available():
        print(f"✓ GPU available: {torch.cuda.get_device_name(0)}")
        print(f"  GPU memory: {torch.cuda.get_device_properties(0).total_memory / 1e9:.2f} GB")
    else:
        print("⚠ GPU not available, using CPU")
    print(f"Using device: {device}")
    print()

    print("Loading dataset...")
    dataset = NNUEDataset(data_file)
    num_positions = len(dataset)
    print(f"Dataset size: {num_positions}")
    print(f"Data normalization: {'Yes (tanh)' if dataset.is_normalized else 'No (raw centipawns)'}")

    evals_array = np.array(dataset.evals)
    print()
    print("=" * 60)
    print("TRAINING DATASET DIAGNOSTICS")
    print("=" * 60)
    print(f"Min evaluation: {evals_array.min():.4f}")
    print(f"Max evaluation: {evals_array.max():.4f}")
    print(f"Mean evaluation: {evals_array.mean():.4f}")
    print(f"Median evaluation: {np.median(evals_array):.4f}")
    print(f"Std deviation: {evals_array.std():.4f}")
    print("=" * 60)
    print()

    train_size = int(0.9 * len(dataset))
    val_size = len(dataset) - train_size

    from torch.utils.data import random_split, RandomSampler
    generator = torch.Generator().manual_seed(42)
    train_dataset, val_dataset = random_split(dataset, [train_size, val_size], generator=generator)

    subsample_size = max(1, int(subsample_ratio * len(train_dataset)))
    train_sampler = RandomSampler(train_dataset, replacement=False, num_samples=subsample_size)
    train_loader = DataLoader(
        train_dataset,
        batch_size=batch_size,
        sampler=train_sampler,
        num_workers=8,
        pin_memory=True,
        persistent_workers=True
    )
    val_loader = DataLoader(
        val_dataset,
        batch_size=batch_size,
        shuffle=False,
        num_workers=8,
        pin_memory=True,
        persistent_workers=True
    )

    return device, dataset, train_dataset, val_dataset, train_loader, val_loader, num_positions

def _run_training_season(
    model, optimizer, scheduler, scaler,
    train_loader, val_loader, train_dataset, val_dataset,
    device, criterion, output_file,
    start_epoch, epochs, early_stopping_patience,
    season_start_time, deadline=None, initial_best_val_loss=float('inf')
):
    """Run one training season until epoch limit, early stopping, or deadline.

    Args:
        initial_best_val_loss: Baseline to beat — epochs that don't improve on this count
            toward early stopping and do not save snapshots.
    Returns:
        (best_val_loss, best_model_state, last_epoch)
        best_model_state is None if no epoch beat initial_best_val_loss.
    """
    best_val_loss = initial_best_val_loss
    best_model_state = None
    epochs_without_improvement = 0
    total_epochs = start_epoch + epochs
    last_epoch = start_epoch - 1

    for epoch in range(start_epoch, start_epoch + epochs):
        if deadline and datetime.now() >= deadline:
            print("Time limit reached, stopping season.")
            break

        epoch_display = epoch + 1

        # Train
        model.train()
        train_loss = 0.0
        with tqdm(total=len(train_loader), desc=f"Epoch {epoch_display}/{total_epochs} - Train") as pbar:
            for batch_features, batch_targets in train_loader:
                batch_features = batch_features.to(device)
                batch_targets = batch_targets.to(device).unsqueeze(1)

                optimizer.zero_grad()

                with torch.amp.autocast('cuda' if torch.cuda.is_available() else 'cpu'):
                    outputs = model(batch_features)
                    loss = criterion(outputs, batch_targets)

                scaler.scale(loss).backward()
                scaler.step(optimizer)
                scaler.update()

                train_loss += loss.item() * batch_features.size(0)
                pbar.update(1)

        train_loss /= len(train_dataset)

        # Validation
        model.eval()
        val_loss = 0.0
        with torch.no_grad():
            with tqdm(total=len(val_loader), desc=f"Epoch {epoch_display}/{total_epochs} - Val") as pbar:
                for batch_features, batch_targets in val_loader:
                    batch_features = batch_features.to(device)
                    batch_targets = batch_targets.to(device).unsqueeze(1)

                    with torch.amp.autocast('cuda' if torch.cuda.is_available() else 'cpu'):
                        outputs = model(batch_features)
                        loss = criterion(outputs, batch_targets)
                    val_loss += loss.item() * batch_features.size(0)
                    pbar.update(1)

        val_loss /= len(val_dataset)

        scheduler.step()

        if torch.cuda.is_available():
            gpu_mem_used = torch.cuda.memory_allocated(device) / 1e9
            gpu_mem_reserved = torch.cuda.memory_reserved(device) / 1e9
            print(f"GPU Memory: {gpu_mem_used:.2f}GB used, {gpu_mem_reserved:.2f}GB reserved")

        # ETA from the average time of the epochs completed this season
        elapsed_time = datetime.now() - season_start_time
        time_per_epoch = elapsed_time.total_seconds() / (epoch - start_epoch + 1)
        remaining_epochs = total_epochs - epoch_display
        eta_seconds = time_per_epoch * remaining_epochs
        eta_str = str(timedelta(seconds=int(eta_seconds)))
        print(f"Epoch {epoch_display}: Train Loss = {train_loss:.6f}, Val Loss = {val_loss:.6f} | ETA: {eta_str}")

        checkpoint_file = output_file.replace(".pt", "_checkpoint.pt")
        torch.save({
            "epoch": epoch,
            "model_state_dict": model.state_dict(),
            "optimizer_state_dict": optimizer.state_dict(),
            "scheduler_state_dict": scheduler.state_dict(),
            "scaler_state_dict": scaler.state_dict(),
            "best_val_loss": best_val_loss,
            "hidden_sizes": model.hidden_sizes,
        }, checkpoint_file)

        if val_loss < best_val_loss:
            best_val_loss = val_loss
            best_model_state = model.state_dict().copy()
            epochs_without_improvement = 0
            snapshot_file = output_file.replace(".pt", "_best_snapshot.pt")
            torch.save(best_model_state, snapshot_file)
            print(f"  Best model snapshot saved: {snapshot_file} (val_loss: {val_loss:.6f})")
        else:
            epochs_without_improvement += 1

        last_epoch = epoch

        if early_stopping_patience and epochs_without_improvement >= early_stopping_patience:
            print(f"Early stopping: no improvement for {early_stopping_patience} epochs")
            break

    return best_val_loss, best_model_state, last_epoch

def _save_versioned_model(best_model_state, optimizer, scheduler, scaler, last_epoch,
                          best_val_loss, output_file, use_versioning, num_positions,
                          stockfish_depth, training_start_time, hidden_sizes=None,
                          extra_metadata=None):
    """Save the best model with optional versioning and metadata."""
    final_output_file = output_file
    metadata = {}
    architecture = [768] + list(hidden_sizes or DEFAULT_HIDDEN_SIZES) + [1]

    if use_versioning:
        base_name = output_file.replace(".pt", "")
        version = find_next_version(base_name)
        final_output_file = f"{base_name}_v{version}.pt"

        metadata = {
            "version": version,
            "date": training_start_time.isoformat(),
            "num_positions": num_positions,
            "stockfish_depth": stockfish_depth,
            "final_val_loss": float(best_val_loss),
            "architecture": architecture,
            "device": str(torch.device("cuda" if torch.cuda.is_available() else "cpu")),
            "notes": "Win rate vs classical eval: TBD (requires benchmark games)"
        }
        if extra_metadata:
            metadata.update(extra_metadata)

    torch.save({
        "model_state_dict": best_model_state,
        "optimizer_state_dict": optimizer.state_dict(),
        "scheduler_state_dict": scheduler.state_dict(),
        "scaler_state_dict": scaler.state_dict(),
        "epoch": last_epoch,
        "best_val_loss": best_val_loss,
        "hidden_sizes": list(hidden_sizes or DEFAULT_HIDDEN_SIZES),
    }, final_output_file)
    print(f"Best model saved to {final_output_file}")

    if use_versioning and metadata:
        metadata_file = save_metadata(final_output_file, metadata)
        print(f"Metadata saved to {metadata_file}")
        print("\nTraining Summary:")
        for key, val in metadata.items():
            print(f"  {key}: {val}")

def train_nnue(data_file, output_file="nnue_weights.pt", epochs=100, batch_size=16384,
               lr=0.001, checkpoint=None, stockfish_depth=12, use_versioning=True,
               early_stopping_patience=None, weight_decay=1e-4, subsample_ratio=1.0,
               hidden_sizes=None):
    """Train the NNUE model with GPU optimizations and automatic mixed precision.

    Args:
        data_file: Path to training_data.jsonl
        output_file: Where to save best weights (or base name if use_versioning=True)
        epochs: Number of training epochs (default: 100)
        batch_size: Training batch size (default: 16384)
        lr: Learning rate (default: 0.001)
        checkpoint: Optional path to checkpoint file to resume from
        stockfish_depth: Depth used in Stockfish evaluation (for metadata)
        use_versioning: If True, save as nnue_weights_v{N}.pt with metadata
        early_stopping_patience: Stop if val loss doesn't improve for N epochs (None to disable)
        weight_decay: L2 regularization strength (default: 1e-4, helps prevent overfitting)
        subsample_ratio: Fraction of training data to sample per epoch (default: 1.0 = all data)
        hidden_sizes: Hidden layer sizes (default: [1536, 1024, 512, 256])
    """
    device, dataset, train_dataset, val_dataset, train_loader, val_loader, num_positions = \
        _setup_training(data_file, batch_size, subsample_ratio)

    start_epoch = 0
    best_val_loss = float('inf')
    resolved_hidden_sizes = list(hidden_sizes or DEFAULT_HIDDEN_SIZES)

    # Load the checkpoint once; peek at its architecture before building the model.
    ckpt = None
    if checkpoint:
        print(f"Loading checkpoint: {checkpoint}")
        ckpt = torch.load(checkpoint, map_location=device)
        if isinstance(ckpt, dict) and "model_state_dict" in ckpt:
            ckpt_hidden = ckpt.get("hidden_sizes")
            if ckpt_hidden and ckpt_hidden != resolved_hidden_sizes:
                print(f"  Using architecture from checkpoint: {ckpt_hidden}")
                resolved_hidden_sizes = ckpt_hidden

    model = NNUE(hidden_sizes=resolved_hidden_sizes).to(device)
    criterion = nn.MSELoss()
    optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    scaler = torch.amp.GradScaler('cuda') if torch.cuda.is_available() else torch.amp.GradScaler('cpu')

    if ckpt is not None:
        if isinstance(ckpt, dict) and "model_state_dict" in ckpt:
            model.load_state_dict(ckpt["model_state_dict"])
            optimizer.load_state_dict(ckpt["optimizer_state_dict"])
            scheduler.load_state_dict(ckpt["scheduler_state_dict"])
            scaler.load_state_dict(ckpt["scaler_state_dict"])
            start_epoch = ckpt["epoch"] + 1
            best_val_loss = ckpt.get("best_val_loss", float('inf'))
            print(f"Resumed from epoch {start_epoch} (best val loss so far: {best_val_loss:.6f})")
        else:
            model.load_state_dict(ckpt)
            print("Loaded weights-only checkpoint (no optimizer state)")

    checkpoint_val_loss = best_val_loss if checkpoint else float('inf')

    subsample_size = max(1, int(subsample_ratio * len(train_dataset)))
    arch_str = " → ".join(str(s) for s in [768] + resolved_hidden_sizes + [1])
    print(f"Architecture: {arch_str}")
    print(f"Training for {epochs} epochs with batch_size={batch_size}, lr={lr}...")
    print(f"Learning rate scheduler: Cosine annealing (T_max={epochs})")
    print("Mixed precision training: enabled")
    print(f"Regularization: Dropout (20%) + L2 weight decay ({weight_decay})")
    if subsample_ratio < 1.0:
        print(f"Stochastic sampling: {subsample_ratio:.0%} of train set per epoch ({subsample_size:,} positions)")
    if early_stopping_patience:
        print(f"Early stopping enabled (patience: {early_stopping_patience} epochs)")
    print()

    training_start_time = datetime.now()

    best_val_loss, best_model_state, last_epoch = _run_training_season(
        model, optimizer, scheduler, scaler,
        train_loader, val_loader, train_dataset, val_dataset,
        device, criterion, output_file,
        start_epoch, epochs, early_stopping_patience,
        training_start_time
    )

    if best_model_state is None or best_val_loss >= checkpoint_val_loss:
        print(f"\nNo improvement over checkpoint (best: {best_val_loss:.6f} vs checkpoint: {checkpoint_val_loss:.6f})")
        print("No new model created.")
        return

    _save_versioned_model(
        best_model_state, optimizer, scheduler, scaler, last_epoch,
        best_val_loss, output_file, use_versioning, num_positions,
        stockfish_depth, training_start_time,
        hidden_sizes=resolved_hidden_sizes,
        extra_metadata={"epochs": epochs, "batch_size": batch_size, "learning_rate": lr,
                        "checkpoint": str(checkpoint) if checkpoint else None}
    )

def burst_train(data_file, output_file="nnue_weights.pt", duration_minutes=60,
|
||||
epochs_per_season=50, early_stopping_patience=10,
|
||||
batch_size=16384, lr=0.001, initial_checkpoint=None,
|
||||
stockfish_depth=12, use_versioning=True,
|
||||
weight_decay=1e-4, subsample_ratio=1.0, hidden_sizes=None):
|
||||
"""Train in burst mode: repeatedly restart from the best checkpoint until the time budget expires.
|
||||
|
||||
Each season trains with early stopping. When early stopping fires, the model reloads the
|
||||
global best weights and begins a fresh season with a reset optimizer and scheduler.
|
||||
This prevents the model from drifting away from its best known state.
|
||||
|
||||
Args:
|
||||
data_file: Path to training_data.jsonl
|
||||
output_file: Output file base name
|
||||
duration_minutes: Total training budget in minutes
|
||||
epochs_per_season: Max epochs per restart season (default: 50)
|
||||
early_stopping_patience: Patience for early stopping within each season (default: 10)
|
||||
batch_size: Training batch size (default: 16384)
|
||||
lr: Learning rate reset to this value at the start of each season (default: 0.001)
|
||||
initial_checkpoint: Optional weights-only .pt file to start from
|
||||
stockfish_depth: Depth used in Stockfish evaluation (for metadata)
|
||||
use_versioning: If True, save as nnue_weights_v{N}.pt with metadata
|
||||
weight_decay: L2 regularization strength (default: 1e-4)
|
||||
subsample_ratio: Fraction of training data to sample per epoch (default: 1.0)
|
||||
hidden_sizes: Hidden layer sizes (default: [1536, 1024, 512, 256])
|
||||
"""
|
    deadline = datetime.now() + timedelta(minutes=duration_minutes)

    device, dataset, train_dataset, val_dataset, train_loader, val_loader, num_positions = \
        _setup_training(data_file, batch_size, subsample_ratio)

    resolved_hidden_sizes = list(hidden_sizes or DEFAULT_HIDDEN_SIZES)

    if initial_checkpoint:
        print(f"Loading initial weights: {initial_checkpoint}")
        ckpt = torch.load(initial_checkpoint, map_location=device)
        if isinstance(ckpt, dict) and "model_state_dict" in ckpt:
            ckpt_hidden = ckpt.get("hidden_sizes")
            if ckpt_hidden and ckpt_hidden != resolved_hidden_sizes:
                print(f" Using architecture from checkpoint: {ckpt_hidden}")
                resolved_hidden_sizes = ckpt_hidden

    model = NNUE(hidden_sizes=resolved_hidden_sizes).to(device)
    criterion = nn.MSELoss()
    best_global_val_loss = float('inf')

    if initial_checkpoint:
        ckpt = torch.load(initial_checkpoint, map_location=device)
        if isinstance(ckpt, dict) and "model_state_dict" in ckpt:
            model.load_state_dict(ckpt["model_state_dict"])
            best_global_val_loss = ckpt.get("best_val_loss", float('inf'))
            if best_global_val_loss < float('inf'):
                print(f"Resumed from checkpoint (best val loss: {best_global_val_loss:.6f})")
            else:
                print("Initial weights loaded (no val loss in checkpoint).")
        else:
            model.load_state_dict(ckpt)
            print("Loaded weights-only checkpoint (no val loss info).")

    arch_str = " → ".join(str(s) for s in [768] + resolved_hidden_sizes + [1])
    print(f"Architecture: {arch_str}")
    print(f"Burst training: {duration_minutes}m budget, {epochs_per_season} epochs/season, patience={early_stopping_patience}")
    print(f"Deadline: {deadline.strftime('%H:%M:%S')}")
    print()

    burst_start_time = datetime.now()
    season = 0
    best_global_state = None
    last_optimizer = None
    last_scheduler = None
    last_scaler = None
    last_epoch = 0

    while datetime.now() < deadline:
        season += 1
        remaining_minutes = (deadline - datetime.now()).total_seconds() / 60
        print(f"\n{'=' * 60}")
        print(f"BURST SEASON {season} | {remaining_minutes:.1f} minutes remaining")
        if best_global_val_loss < float('inf'):
            print(f"Global best val loss so far: {best_global_val_loss:.6f}")
        print(f"{'=' * 60}\n")

        optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
        scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs_per_season)
        scaler = torch.amp.GradScaler('cuda') if torch.cuda.is_available() else torch.amp.GradScaler('cpu')

        season_start_time = datetime.now()
        val_loss, model_state, last_epoch = _run_training_season(
            model, optimizer, scheduler, scaler,
            train_loader, val_loader, train_dataset, val_dataset,
            device, criterion, output_file,
            0, epochs_per_season, early_stopping_patience,
            season_start_time, deadline=deadline,
            initial_best_val_loss=best_global_val_loss
        )

        last_optimizer = optimizer
        last_scheduler = scheduler
        last_scaler = scaler

        if model_state is not None and val_loss < best_global_val_loss:
            best_global_val_loss = val_loss
            best_global_state = model_state
            print(f" New global best: {best_global_val_loss:.6f} (season {season})")

        # Reload global best for the next season so we never drift backwards
        if best_global_state is not None:
            model.load_state_dict(best_global_state)

    total_minutes = (datetime.now() - burst_start_time).total_seconds() / 60
    print(f"\n{'=' * 60}")
    print(f"Burst training complete: {season} season(s) in {total_minutes:.1f}m")
    print(f"Best val loss: {best_global_val_loss:.6f}")
    print(f"{'=' * 60}\n")

    if best_global_state is None:
        print("No model improvement found. No file saved.")
        return

    _save_versioned_model(
        best_global_state, last_optimizer, last_scheduler, last_scaler, last_epoch,
        best_global_val_loss, output_file, use_versioning, num_positions,
        stockfish_depth, burst_start_time,
        hidden_sizes=resolved_hidden_sizes,
        extra_metadata={
            "mode": "burst",
            "duration_minutes": duration_minutes,
            "epochs_per_season": epochs_per_season,
            "early_stopping_patience": early_stopping_patience,
            "seasons_completed": season,
            "batch_size": batch_size,
            "learning_rate": lr,
            "initial_checkpoint": str(initial_checkpoint) if initial_checkpoint else None,
        }
    )


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(description="Train NNUE neural network for chess evaluation")
    parser.add_argument("data_file", nargs="?", default="training_data.jsonl",
                        help="Path to training_data.jsonl (default: training_data.jsonl)")
    parser.add_argument("output_file", nargs="?", default="nnue_weights.pt",
                        help="Output file base name (default: nnue_weights.pt)")
    parser.add_argument("--checkpoint", type=str, default=None,
                        help="Path to checkpoint file to resume training from (optional)")
    parser.add_argument("--epochs", type=int, default=100,
                        help="Number of epochs to train (default: 100)")
    parser.add_argument("--batch-size", type=int, default=16384,
                        help="Batch size (default: 16384)")
    parser.add_argument("--lr", type=float, default=0.001,
                        help="Learning rate (default: 0.001)")
    parser.add_argument("--early-stopping", type=int, default=None,
                        help="Stop if val loss doesn't improve for N epochs (optional)")
    parser.add_argument("--stockfish-depth", type=int, default=12,
                        help="Stockfish depth used for evaluations (for metadata, default: 12)")
    parser.add_argument("--no-versioning", action="store_true",
                        help="Disable automatic versioning (save directly to output file)")
    parser.add_argument("--weight-decay", type=float, default=5e-5,
                        help="L2 regularization strength (default: 5e-5, helps prevent overfitting)")
    parser.add_argument("--subsample-ratio", type=float, default=1.0,
                        help="Fraction of training data to sample per epoch (default: 1.0 = all data)")
    parser.add_argument("--hidden-layers", type=str, default=None,
                        help="Comma-separated hidden layer sizes (default: 1536,1024,512,256)")

    # Burst mode
    parser.add_argument("--burst-duration", type=float, default=None,
                        help="Enable burst mode: total training budget in minutes")
    parser.add_argument("--epochs-per-season", type=int, default=50,
                        help="Max epochs per burst season before restarting (default: 50, burst mode only)")

    args = parser.parse_args()

    hidden_sizes = [int(x) for x in args.hidden_layers.split(",")] if args.hidden_layers else None

    if args.burst_duration is not None:
        burst_train(
            data_file=args.data_file,
            output_file=args.output_file,
            duration_minutes=args.burst_duration,
            epochs_per_season=args.epochs_per_season,
            early_stopping_patience=args.early_stopping or 10,
            batch_size=args.batch_size,
            lr=args.lr,
            initial_checkpoint=args.checkpoint,
            stockfish_depth=args.stockfish_depth,
            use_versioning=not args.no_versioning,
            weight_decay=args.weight_decay,
            subsample_ratio=args.subsample_ratio,
            hidden_sizes=hidden_sizes,
        )
    else:
        train_nnue(
            data_file=args.data_file,
            output_file=args.output_file,
            epochs=args.epochs,
            batch_size=args.batch_size,
            lr=args.lr,
            checkpoint=args.checkpoint,
            stockfish_depth=args.stockfish_depth,
            use_versioning=not args.no_versioning,
            early_stopping_patience=args.early_stopping,
            weight_decay=args.weight_decay,
            subsample_ratio=args.subsample_ratio,
            hidden_sizes=hidden_sizes,
        )
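The season loop above amounts to warm restarts: each season re-creates Adam and a cosine learning-rate schedule from scratch, and only the globally best validation loss survives across seasons. A minimal standalone sketch of those two mechanics (the function names here are illustrative, not from this repo):

```python
import math

def cosine_lr(base_lr: float, epoch: int, t_max: int) -> float:
    # Cosine annealing from base_lr down to 0 over t_max epochs,
    # mirroring CosineAnnealingLR(T_max=epochs_per_season) with eta_min=0.
    return base_lr * (1 + math.cos(math.pi * epoch / t_max)) / 2

def best_across_seasons(season_val_losses, initial_best=float("inf")):
    # Keep only the globally best validation loss across restart seasons,
    # as best_global_val_loss does in the loop above.
    best = initial_best
    for losses in season_val_losses:
        best = min(best, min(losses))
    return best
```

Because the optimizer and schedule reset each season while the model keeps the best weights found so far, a later season can escape a plateau without ever regressing below the saved global best.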
@@ -0,0 +1,30 @@
# Setup and run NNUE training pipeline

$ScriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
$VenvDir = Join-Path $ScriptDir ".venv"

# Check if virtual environment exists
if (-not (Test-Path $VenvDir)) {
    Write-Host "Creating virtual environment..."
    python -m venv $VenvDir
    if ($LASTEXITCODE -ne 0) {
        Write-Host "Error: Failed to create virtual environment. Make sure python is installed."
        exit 1
    }
}

# Activate virtual environment
Write-Host "Activating virtual environment..."
$ActivateScript = Join-Path $VenvDir "Scripts\Activate.ps1"
& $ActivateScript

# Install/update dependencies if requirements.txt exists
$RequirementsFile = Join-Path $ScriptDir "requirements.txt"
if (Test-Path $RequirementsFile) {
    Write-Host "Installing dependencies..."
    pip install -q -r $RequirementsFile
}

# Run nnue.py
Write-Host "Starting NNUE Training Pipeline..."
python (Join-Path $ScriptDir "nnue.py")
@@ -0,0 +1,29 @@
#!/bin/bash
# Setup and run NNUE training pipeline

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
VENV_DIR="$SCRIPT_DIR/.venv"

# Check if virtual environment exists
if [ ! -d "$VENV_DIR" ]; then
    echo "Creating virtual environment..."
    python3 -m venv "$VENV_DIR"
    if [ $? -ne 0 ]; then
        echo "Error: Failed to create virtual environment. Make sure python3 is installed."
        exit 1
    fi
fi

# Activate virtual environment
echo "Activating virtual environment..."
source "$VENV_DIR/bin/activate"

# Install/update dependencies if requirements.txt exists
if [ -f "$SCRIPT_DIR/requirements.txt" ]; then
    echo "Installing dependencies..."
    pip install -q -r "$SCRIPT_DIR/requirements.txt"
fi

# Run nnue.py
echo "Starting NNUE Training Pipeline..."
python "$SCRIPT_DIR/nnue.py"
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -0,0 +1,21 @@
{
    "version": 10,
    "date": "2026-04-14T22:18:38.824577",
    "num_positions": 3022562,
    "stockfish_depth": 12,
    "final_val_loss": 6.248612448225196e-05,
    "architecture": [
        768,
        1536,
        1024,
        512,
        256,
        1
    ],
    "device": "cuda",
    "notes": "Win rate vs classical eval: TBD (requires benchmark games)",
    "epochs": 100,
    "batch_size": 16384,
    "learning_rate": 0.001,
    "checkpoint": null
}
@@ -0,0 +1,13 @@
{
    "version": 1,
    "date": "2026-04-07T22:56:23.259658",
    "num_positions": 2086,
    "stockfish_depth": 12,
    "epochs": 20,
    "batch_size": 4096,
    "learning_rate": 0.001,
    "final_val_loss": 0.016311248764395714,
    "device": "cuda",
    "checkpoint": null,
    "notes": "Win rate vs classical eval: TBD (requires benchmark games)"
}
Binary file not shown.
@@ -0,0 +1,13 @@
{
    "version": 2,
    "date": "2026-04-07T23:50:05.390402",
    "num_positions": 6886,
    "stockfish_depth": 12,
    "epochs": 100,
    "batch_size": 4096,
    "learning_rate": 0.001,
    "final_val_loss": 0.007848377339541912,
    "device": "cuda",
    "checkpoint": "/mnt/d/Workspaces/NowChessSystems/modules/bot/python/weights/nnue_weights_v1.pt",
    "notes": "Win rate vs classical eval: TBD (requires benchmark games)"
}
Binary file not shown.
@@ -0,0 +1,13 @@
{
    "version": 3,
    "date": "2026-04-08T09:43:28.000579",
    "num_positions": 71610,
    "stockfish_depth": 12,
    "epochs": 20,
    "batch_size": 4096,
    "learning_rate": 0.001,
    "final_val_loss": 0.006398905136849695,
    "device": "cpu",
    "checkpoint": "/home/janis/Workspaces/IntelliJ/NowChess/NowChessSystems/modules/bot/python/weights/nnue_weights_v2.pt",
    "notes": "Win rate vs classical eval: TBD (requires benchmark games)"
}
Binary file not shown.
@@ -0,0 +1,13 @@
{
    "version": 4,
    "date": "2026-04-09T00:28:07.572209",
    "num_positions": 2009355,
    "stockfish_depth": 12,
    "epochs": 40,
    "batch_size": 4096,
    "learning_rate": 0.001,
    "final_val_loss": 9.106677896235248e-05,
    "device": "cuda",
    "checkpoint": "/mnt/d/Workspaces/NowChessSystems/modules/bot/python/weights/nnue_weights_v3.pt",
    "notes": "Win rate vs classical eval: TBD (requires benchmark games)"
}
Binary file not shown.
@@ -0,0 +1,13 @@
{
    "version": 5,
    "date": "2026-04-09T18:50:27.845632",
    "num_positions": 2009355,
    "stockfish_depth": 12,
    "epochs": 100,
    "batch_size": 16384,
    "learning_rate": 0.001,
    "final_val_loss": 9.180421525105905e-05,
    "device": "cuda",
    "checkpoint": null,
    "notes": "Win rate vs classical eval: TBD (requires benchmark games)"
}
Binary file not shown.
@@ -0,0 +1,13 @@
{
    "version": 6,
    "date": "2026-04-09T21:28:21.000832",
    "num_positions": 1958728,
    "stockfish_depth": 12,
    "epochs": 100,
    "batch_size": 16384,
    "learning_rate": 0.001,
    "final_val_loss": 0.2984530149085532,
    "device": "cuda",
    "checkpoint": "/home/janis/Workspaces/NowChessSystems/modules/bot/python/weights/nnue_weights_v5.pt",
    "notes": "Win rate vs classical eval: TBD (requires benchmark games)"
}
Binary file not shown.
@@ -0,0 +1,13 @@
{
    "version": 7,
    "date": "2026-04-09T22:06:50.439858",
    "num_positions": 1958728,
    "stockfish_depth": 12,
    "epochs": 100,
    "batch_size": 16384,
    "learning_rate": 0.001,
    "final_val_loss": 0.2997283308762831,
    "device": "cuda",
    "checkpoint": null,
    "notes": "Win rate vs classical eval: TBD (requires benchmark games)"
}
Binary file not shown.
@@ -0,0 +1,13 @@
{
    "version": 8,
    "date": "2026-04-09T22:22:47.859730",
    "num_positions": 1958728,
    "stockfish_depth": 12,
    "epochs": 100,
    "batch_size": 16384,
    "learning_rate": 0.001,
    "final_val_loss": 0.24803777390839968,
    "device": "cuda",
    "checkpoint": "/home/janis/Workspaces/NowChessSystems/modules/bot/python/weights/nnue_weights_v7.pt",
    "notes": "Win rate vs classical eval: TBD (requires benchmark games)"
}
Binary file not shown.
@@ -0,0 +1,17 @@
{
    "version": 9,
    "date": "2026-04-13T20:19:08.123315",
    "num_positions": 2522562,
    "stockfish_depth": 12,
    "final_val_loss": 6.994176222619626e-05,
    "device": "cuda",
    "notes": "Win rate vs classical eval: TBD (requires benchmark games)",
    "mode": "burst",
    "duration_minutes": 30.0,
    "epochs_per_season": 50,
    "early_stopping_patience": 10,
    "seasons_completed": 3,
    "batch_size": 16384,
    "learning_rate": 0.001,
    "initial_checkpoint": "/home/janis/Workspaces/NowChess/NowChessSystems/modules/bot/python/weights/nnue_weights_v8.pt"
}
Binary file not shown.
@@ -0,0 +1,21 @@
package de.nowchess.bot

import de.nowchess.api.bot.Bot
import de.nowchess.bot.bots.ClassicalBot

object BotController {

  private val bots: Map[String, Bot] = Map(
    "easy" -> ClassicalBot(BotDifficulty.Easy),
    "medium" -> ClassicalBot(BotDifficulty.Medium),
    "hard" -> ClassicalBot(BotDifficulty.Hard),
    "expert" -> ClassicalBot(BotDifficulty.Expert),
  )

  /** Get a bot by name. */
  def getBot(name: String): Option[Bot] = bots.get(name.toLowerCase)

  /** List all available bot names. */
  def listBots: List[String] = bots.keys.toList.sorted

}
@@ -0,0 +1,7 @@
package de.nowchess.bot

enum BotDifficulty:
  case Easy
  case Medium
  case Hard
  case Expert
@@ -0,0 +1,19 @@
package de.nowchess.bot

import de.nowchess.api.game.GameContext
import de.nowchess.api.move.Move

object BotMoveRepetition:

  private val maxConsecutiveMoves = 3

  def blockedMoves(context: GameContext): Set[Move] = repeatedMove(context).toSet

  def repeatedMove(context: GameContext): Option[Move] =
    context.moves.takeRight(maxConsecutiveMoves) match
      case first :: second :: third :: Nil if first == second && second == third => Some(first)
      case _ => None

  def filterAllowed(context: GameContext, moves: List[Move]): List[Move] =
    val blocked = blockedMoves(context)
    moves.filterNot(blocked.contains)
@@ -0,0 +1,11 @@
package de.nowchess.bot

object Config:

  /** Threshold in centipawns: if classical evaluation differs from NNUE by more than this, the move is vetoed (not
    * accepted as a suggestion).
    */
  val VETO_THRESHOLD: Int = 150

  /** Time budget per move for iterative deepening (milliseconds). */
  val TIME_LIMIT_MS: Long = 2000L
@@ -0,0 +1,29 @@
package de.nowchess.bot.ai

import de.nowchess.api.game.GameContext
import de.nowchess.api.move.Move

trait Evaluation:

  def CHECKMATE_SCORE: Int
  def DRAW_SCORE: Int

  def evaluate(context: GameContext): Int

  // ── Accumulator hooks ─────────────────────────────────────────────────────
  // Default implementations fall back to full re-evaluation each call.
  // Override in NNUE-capable evaluators for incremental L1 speedup.

  /** Initialise the accumulator for the root position at ply 0. */
  def initAccumulator(context: GameContext): Unit = ()

  /** Copy parent ply's accumulator to childPly without move deltas (null-move). */
  def copyAccumulator(parentPly: Int, childPly: Int): Unit = ()

  /** Derive childPly's accumulator from parentPly by applying move deltas. */
  def pushAccumulator(childPly: Int, move: Move, parent: GameContext, child: GameContext): Unit = ()

  /** Evaluate from the pre-computed accumulator at ply, using hash for the eval cache. Falls back to full evaluate when
    * not overridden.
    */
  def evaluateAccumulator(ply: Int, context: GameContext, hash: Long): Int = evaluate(context)