23 Commits

Author SHA1 Message Date
21ae2ca91c feat: progress 2026-01-17 14:53:09 -05:00
528956b033 feat: add De Bruijn indexed reduction engine
Add a new interpreter option (-i debruijn) that uses De Bruijn indices
for variable representation, eliminating the need for variable renaming
during substitution.

- Add -i flag to select interpreter (lambda or debruijn)
- Create debruijn package with Expression types (Variable with index,
  Abstraction without parameter, Application)
- Implement shift and substitute operations for De Bruijn indices
- Add conversion functions between lambda and De Bruijn representations
- Update CLI to support switching between interpreters
- Add De Bruijn tests to verify all samples pass
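As a rough illustration of the shift and substitute operations this engine relies on, here is a minimal, self-contained sketch; the type and function names (`Term`, `Var`, `Abs`, `App`, `shift`, `substitute`, `betaReduce`) are illustrative and do not necessarily match the actual `debruijn` package.

```go
package main

import "fmt"

// Term is a lambda term in De Bruijn notation.
type Term interface{ isTerm() }

// Var refers to its binder by index (0 is the innermost abstraction).
type Var struct{ Index int }

// Abs is an abstraction; no parameter name is stored.
type Abs struct{ Body Term }

// App applies Fn to Arg.
type App struct{ Fn, Arg Term }

func (Var) isTerm() {}
func (Abs) isTerm() {}
func (App) isTerm() {}

// shift adds d to every free variable whose index is at least cutoff.
func shift(t Term, d, cutoff int) Term {
	switch t := t.(type) {
	case Var:
		if t.Index >= cutoff {
			return Var{t.Index + d}
		}
		return t
	case Abs:
		return Abs{shift(t.Body, d, cutoff+1)}
	case App:
		return App{shift(t.Fn, d, cutoff), shift(t.Arg, d, cutoff)}
	}
	return t
}

// substitute replaces variable j in t with s, shifting s as it crosses binders.
func substitute(t Term, j int, s Term) Term {
	switch t := t.(type) {
	case Var:
		if t.Index == j {
			return s
		}
		return t
	case Abs:
		return Abs{substitute(t.Body, j+1, shift(s, 1, 0))}
	case App:
		return App{substitute(t.Fn, j, s), substitute(t.Arg, j, s)}
	}
	return t
}

// betaReduce performs one beta step: (λ. body) arg => body[0 := arg].
// No variable renaming is ever needed.
func betaReduce(body, arg Term) Term {
	return shift(substitute(body, 0, shift(arg, 1, 0)), -1, 0)
}

func main() {
	// (λ. 0) (λ. λ. 1) reduces to λ. λ. 1.
	fmt.Printf("%#v\n", betaReduce(Var{0}, Abs{Abs{Var{1}}}))
}
```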

Closes #26
2026-01-16 19:36:05 -05:00
1974ad582f refactor: move event system to reducer, remove engine package (#32)
## Description

This PR completes the MVC-inspired refactoring by moving the event system from the engine into the reducer.
The engine package is now removed entirely, as the reducer handles both reduction logic and lifecycle events.

- Add `pkg/reducer/events.go` with `StartEvent`, `StepEvent`, and `StopEvent`.
- Extend `Reducer` interface to embed `Emitter[Event]` and add `Expression()` method.
- Update `NormalOrderReducer` to embed `BaseEmitter` and emit lifecycle events during reduction.
- Update all plugins to attach to `Reducer` instead of `Engine`.
- Remove `internal/engine` package entirely.
- Add `Off()` method to `BaseEmitter` to complete the `Emitter` interface.
- Fix `Emitter.On` signature to use generic type `E` instead of `string`.

### Decisions

- The `Reducer` interface now combines reduction logic with event emission, making it the single orchestration point.
- Plugins attach directly to the reducer, simplifying the architecture.
- The `Expression()` method on `Reducer` provides access to current state for plugins.
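A rough sketch of the shape this gives the `reducer` package; the method names and signatures below are assumptions for illustration, not the real code.

```go
package reducer

// Event identifies a stage in a reduction run's lifecycle.
type Event int

const (
	StartEvent Event = iota // emitted once before reduction begins
	StepEvent               // emitted after each reduction step
	StopEvent               // emitted once after reduction finishes
)

// Emitter is a minimal stand-in for the generic emitter from PR #28.
type Emitter[E comparable] interface {
	On(event E, handler func())
	Emit(event E)
}

// Expression stands in for the abstract expr.Expression interface.
type Expression interface{ String() string }

// Reducer is the single orchestration point: it performs reduction and
// emits lifecycle events that plugins observe.
type Reducer interface {
	Emitter[Event]
	Reduce()                // run to normal form, emitting Start/Step/Stop
	Expression() Expression // current state, for plugins that need it
}
```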

## Benefits

- Simpler architecture with one fewer abstraction layer.
- Plugins are now mode-agnostic - they work with any `Reducer` implementation.
- Cleaner separation: reducers handle reduction, plugins observe via events.
- Easier to add new evaluation modes - just implement `Reducer` with embedded emitter.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Reviewed-on: #32
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-17 00:27:36 +00:00
f8e1223463 refactor: extract Reducer interface and update engine to use abstractions (#31)
## Description

This PR builds on #30 to complete the abstraction layer for multi-mode evaluation support.
The engine now accepts abstract `expr.Expression` and `reducer.Reducer` types instead of concrete lambda types.

- Add `pkg/reducer/reducer.go` with `Reducer` interface defining `Reduce(expr.Expression, onStep) expr.Expression`.
- Add `pkg/lambda/reducer.go` with `NormalOrderReducer` that wraps the existing `ReduceAll` logic.
- Update `engine.Engine` to store `expr.Expression` and `reducer.Reducer` instead of `*lambda.Expression`.
- Update plugins to use `expr.Expression.String()` directly (no pointer dereference needed).
- Update main and tests to instantiate `NormalOrderReducer` and pass it to the engine.

### Decisions

- The `Reducer.Reduce` method returns the final expression and calls `onStep` after each reduction step with the current state.
- `NormalOrderReducer` type-asserts to `lambda.Expression` internally; other expression types are returned unchanged.
- The engine updates its `Expression` field both during reduction (via `onStep`) and after completion.
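A minimal sketch of the interface and of how an engine could drive it; the `onStep` parameter shape and the `Engine` type here are illustrative assumptions.

```go
package reducer

// Expression stands in for the minimal expr.Expression interface.
type Expression interface{ String() string }

// Reducer reduces an abstract expression, calling onStep after every
// reduction step with the current state.
type Reducer interface {
	Reduce(e Expression, onStep func(Expression)) Expression
}

// Engine is an illustrative consumer: it works with any Reducer and keeps
// its Expression field current both during and after reduction.
type Engine struct {
	Expression Expression
	Reducer    Reducer
}

// Run reduces the stored expression to its final form.
func (e *Engine) Run() {
	e.Expression = e.Reducer.Reduce(e.Expression, func(current Expression) {
		e.Expression = current // updated on every step via onStep
	})
}
```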

## Benefits

- The engine is now fully decoupled from lambda-specific types.
- New evaluation modes can be added by implementing `expr.Expression` and `reducer.Reducer`.
- Plugins work with any expression type that implements `expr.Expression`.
- Prepares the codebase for SKI combinators, typed lambda calculus, or other future modes.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Closes #30

Reviewed-on: #31
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-16 23:42:07 +00:00
e0114c736d refactor: extract abstract Expression interface (#30)
## Description

The codebase currently couples the engine and plugins directly to `lambda.Expression`.
This PR introduces an abstract `expr.Expression` interface to enable future support for multiple evaluation modes.

- Add `pkg/expr/expr.go` with an `Expression` interface requiring a `String()` method.
- Update `lambda.Expression` to embed `expr.Expression`.
- Add `String()` method to `Abstraction`, `Application`, and `Variable` types.
- Update plugins to use `String()` instead of `lambda.Stringify()`.

### Decisions

- The `expr.Expression` interface is minimal (only `String()`) to avoid over-constraining future expression types.
- The engine still stores `*lambda.Expression` directly rather than `expr.Expression`, because Go's interface semantics require pointer indirection for in-place mutation during reduction.
- Future evaluation modes will implement their own concrete types satisfying `expr.Expression`.
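For reference, a minimal sketch of the interface along with one hypothetical concrete type implementing it:

```go
package expr

// Expression is the minimal abstraction shared by all evaluation modes.
// Keeping it to String() avoids over-constraining future expression types.
type Expression interface {
	String() string
}

// Variable is a hypothetical concrete type showing how little is required
// to satisfy the interface.
type Variable struct{ Name string }

func (v Variable) String() string { return v.Name }
```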

## Benefits

- Establishes a foundation for supporting multiple evaluation modes (SKI combinators, typed lambda calculus, etc.).
- Plugins now use the abstract `String()` method, making them more decoupled from the lambda-specific implementation.
- Prepares the codebase for a Reducer interface abstraction in a future PR.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Reviewed-on: #30
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-16 23:37:31 +00:00
5c54f4e195 fix: correct event handler registration in plugins (#29)
## Description

This PR fixes incorrect event handler registration in two plugins that were introduced in the refactoring.
The bugs prevented the plugins from functioning as intended.

Fixed issues:
- Statistics plugin was registering `plugin.Step` for `StopEvent` instead of `plugin.Stop`, preventing statistics from being printed at the end of execution.
- Logs plugin was listening to `StopEvent` instead of `StepEvent`, causing it to log only once at the end instead of on each reduction step.

## Benefits

Statistics are now correctly printed at the end of execution.
Debug logs now correctly show each reduction step instead of just the final state.
Plugins now work as originally intended before the refactoring.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #29
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-14 00:35:02 +00:00
307b7ffd1e refactor: replace string-based emitter with type-safe generic event system (#28)
## Description

This PR refactors the event emitter system from a string-based message passing approach to a type-safe generic implementation using typed events.
The previous system relied on string message names which were error-prone and lacked compile-time safety.
This refactoring introduces a generic `BaseEmitter[E comparable]` that provides type safety while consolidating the various tracker packages into a unified plugins architecture.

Key changes:
- Replace `Emitter` with generic `BaseEmitter[E comparable]` for type-safe event handling.
- Add `Event` type enumeration with `StartEvent`, `StepEvent`, and `StopEvent` constants.
- Create `Listener[E]` interface with `BaseListener` implementation for better abstraction.
- Consolidate `explanation`, `performance`, and `statistics` packages into unified `internal/plugins` package.
- Simplify CLI initialization by using plugin constructors that handle their own event subscriptions.
- Add `Items()` iterator method to `Set` for idiomatic Go 1.23+ range loops over sets.

### Decisions

Use generics for type-safe event handling.
This provides compile-time guarantees that event types match their handlers while maintaining flexibility for future event types.

Consolidate trackers into plugins architecture.
Previously separate packages (`explanation`, `performance`, `statistics`) now live under `internal/plugins`, making the plugin pattern explicit and easier to extend.

Plugin constructors self-register with engine.
Each plugin's `New*` constructor now handles its own event subscriptions, reducing boilerplate in the main CLI.
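A minimal sketch of what a generic, type-safe emitter along these lines might look like; the real `BaseEmitter`, `Listener`, and handler signatures may differ.

```go
package plugins

// Event is a comparable event key; the engine's lifecycle events are values
// of this type.
type Event int

const (
	StartEvent Event = iota
	StepEvent
	StopEvent
)

// BaseEmitter is a minimal type-safe emitter parameterized over a comparable
// event type, so mismatched events fail at compile time.
type BaseEmitter[E comparable] struct {
	handlers map[E][]func()
}

// On registers a handler for the given event.
func (b *BaseEmitter[E]) On(event E, handler func()) {
	if b.handlers == nil {
		b.handlers = make(map[E][]func())
	}
	b.handlers[event] = append(b.handlers[event], handler)
}

// Emit invokes every handler registered for the given event.
func (b *BaseEmitter[E]) Emit(event E) {
	for _, handler := range b.handlers[event] {
		handler()
	}
}
```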

## Benefits

Type safety prevents runtime errors from typos in event names.
The compiler now catches mismatched event types at compile time rather than failing silently at runtime.

Cleaner plugin architecture makes adding new features easier.
New plugins follow a consistent pattern and live in a single location.

Reduced boilerplate in main CLI.
Plugin initialization is now a single function call rather than manual event registration.

Better testability through interface-based design.
The `Listener[E]` interface allows for easier mocking and testing of event handlers.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #28
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-14 00:30:21 +00:00
335ce95c50 feat: add comment support to saccharine language (#25)
## Description

The saccharine language previously lacked comment support, preventing proper code documentation.
This PR implements '#' comment syntax similar to Python.
Comments can appear on their own line or at the end of a line, with all content after '#' ignored until the next newline or EOF.
The tokenizer now detects '#' and skips characters appropriately without creating tokens.

### Decisions

Comments are silently consumed during tokenization rather than being preserved as tokens, keeping the token stream clean for the parser.
The implementation preserves newlines after comments by using the iterator's Back() method, allowing them to be processed as soft breaks.
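As a simplified illustration of the skipping behavior (the real tokenizer walks an iterator and hands the newline back with `Back()` rather than indexing into a slice):

```go
package main

import "fmt"

// skipComment consumes a '#' comment up to, but not including, the next
// newline, so the newline can still be emitted as a soft break.
func skipComment(input []rune, pos int) int {
	for pos < len(input) && input[pos] != '\n' {
		pos++
	}
	return pos
}

func main() {
	src := []rune("id x # identity\n")
	next := skipComment(src, 5) // 5 is the index of '#'
	fmt.Println(next)           // 15: the index of the newline, left for the caller
}
```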

## Benefits

Developers can now document their saccharine code with inline and full-line comments.
The implementation is minimal and efficient, adding no overhead to the token stream.
Tests verify that comments work correctly in various positions without breaking code execution.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Closes #24

Reviewed-on: #25
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-13 02:00:00 +00:00
19652563a4 docs: add issue linking instructions to PR workflow (#23)
## Description

This PR updates CLAUDE.md to include documentation about linking issues in pull requests.
The documentation explains how to use the "Closes #<number>" syntax in commit messages and PR descriptions to automatically link and close issues when PRs are merged.

### Changes

- Added "Linking issues" section to Pull Request Management in [CLAUDE.md](CLAUDE.md:81-82).
- Documented the use of `Closes #<number>` syntax for automatic issue closure.

## Benefits

Developers will know to properly link issues in their PRs.
Issues will be automatically closed when their corresponding PRs are merged.
The repository will maintain better traceability between issues and PRs.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #23
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-13 01:45:53 +00:00
dec9af0244 feat: add issue templates for Gitea (#22)
## Description

This PR implements issue templates for the lambda repository to improve issue tracking and developer experience.
The templates follow Gitea best practices and provide clear guidance for users when creating issues.
Three distinct templates were created to handle different types of issues: feature requests, bug reports, and general issues.

### Changes

- Created [.gitea/ISSUE_TEMPLATE/feature.md](.gitea/ISSUE_TEMPLATE/feature.md) for feature requests and enhancements.
- Created [.gitea/ISSUE_TEMPLATE/bug.md](.gitea/ISSUE_TEMPLATE/bug.md) for bug reports.
- Created [.gitea/ISSUE_TEMPLATE/general.md](.gitea/ISSUE_TEMPLATE/general.md) for general issues.
- Updated [CLAUDE.md](CLAUDE.md) with issue management and workflow documentation.

### Decisions

All templates use Markdown format with YAML frontmatter, following the existing PR template style.
Each template includes pre-populated metadata (title prefix, labels, target branch).
Templates provide inline HTML comments to guide users without cluttering the final issue.
The templates are consistent with conventional commit format (feat:, fix:, etc.).

## Benefits

Users will have clear guidance when creating issues, reducing back-and-forth communication.
Issues will be more structured and contain necessary information for developers.
Pre-populated labels and titles ensure consistent issue categorization.
The templates align with the existing PR template style for a cohesive experience.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #22
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-13 01:43:36 +00:00
dbc3c5a8d4 Improve testing infrastructure with dynamic discovery and validation (#20)
## Summary

This PR enhances the testing infrastructure with dynamic test discovery, automated validation, and improved error handling.

## Changes

### Testing Infrastructure
- Added `TestSamplesValidity` integration test that validates all test files against their expected output.
- Implemented dynamic test discovery using `filepath.Glob` to automatically find all `.test` files.
- Renamed `benchmark_test.go` to `lambda_test.go` for better naming consistency.
- Consolidated helper functions into a single `runSample` function.
- Replaced all error handling with `assert` for consistent and clear test output.
- Required all `.test` files to have corresponding `.expected` files.

### Iterator Improvements
- Added `Swap` method to iterator for better reduction algorithm.
- Improved reduction algorithm with LIFO-based iterator implementation.

### Build System
- Added `make test` target to run tests without benchmarks.
- Updated Makefile help text to include the new test target.

### Test Cases
- Added new test cases with expected outputs: `church_5^5`, `church_6^6`, `fast_list_2^30`, `list_2^30`.
- Added validation files for all test cases.

## Test plan

- Run tests with expected output validation.
- Run benchmarks to ensure performance is maintained.
- Verify make targets work correctly.

Reviewed-on: #20
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-13 01:20:47 +00:00
15c904ccc9 feat: improve reduction algorithm with LIFO-based iterator (#15)
## Description

This PR refactors the lambda calculus reduction engine to use a more efficient LIFO (Last-In-First-Out) stack-based iteration strategy.
Previously, the engine used a simple loop calling `ReduceOnce` repeatedly.
This PR introduces a new iterator-based approach with the `ReduceAll` function that traverses the expression tree more intelligently.

Changes include:

- Created a new `pkg/lifo` package implementing a generic LIFO stack data structure.
- Added `pkg/lambda/iterator.go` with an `Iterator` type for traversing lambda expressions.
- Refactored `pkg/lambda/reduce.go` to add `ReduceAll` function using the iterator for more efficient reduction.
- Updated `internal/engine/engine.go` to use `ReduceAll` instead of looping `ReduceOnce`.
- Renamed sample test files from `.txt` to `.test` extension.
- Fixed `.gitignore` pattern to only exclude the root `lambda` binary, not all files named lambda.
- Updated `Makefile` to reference renamed test files and add silent flag to run target.

### Decisions

- Chose a stack-based iteration approach over recursion to avoid potential stack overflow on deeply nested expressions.
- Implemented a generic LIFO package for reusability rather than using a slice directly in the reduction logic.
- Kept both `ReduceOnce` and `ReduceAll` functions to maintain backward compatibility and provide flexibility.
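A minimal sketch of a generic LIFO stack in the spirit of `pkg/lifo`; the actual API may differ.

```go
package lifo

// Stack is a minimal generic last-in-first-out stack.
type Stack[T any] struct {
	items []T
}

// Push places an item on top of the stack.
func (s *Stack[T]) Push(item T) {
	s.items = append(s.items, item)
}

// Pop removes and returns the top item; ok is false when the stack is empty.
func (s *Stack[T]) Pop() (item T, ok bool) {
	if len(s.items) == 0 {
		return item, false
	}
	item = s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return item, true
}

// Len reports how many items are currently on the stack.
func (s *Stack[T]) Len() int { return len(s.items) }
```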

## Performance

Benchmark results comparing main branch vs this PR on Apple M3:

| Test | Before (ms/op) | After (ms/op) | Change |
|------|----------------|---------------|--------|
| Thunk | 0.014 | 0.014 | 0.00% |
| Fast | 1.29 | 1.20 | **-7.04%** |
| Simple | 21.51 | 6.45 | **-70.01%** |
| Church | 157.67 | 43.00 | -76.788% |
| Saccharine | 185.25 | 178.99 | **-3.38%** |

**Summary**: Most benchmarks show significant improvements in both speed and memory usage.
The Church benchmark shows a regression that needs investigation.

## Benefits

- More efficient expression tree traversal with the iterator pattern.
- Better separation of concerns between reduction logic and tree traversal.
- Generic LIFO stack can be reused in other parts of the codebase.
- Cleaner engine implementation with callback-based step emission.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [ ] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Reviewed-on: #15
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-12 02:16:07 +00:00
609fe05250 feat: add benchmark target to Makefile (#14)
## Description

This PR adds benchmarking capabilities to the lambda interpreter.
The benchmarks measure performance across all sample files in the samples folder.
This enables consistent performance testing and helps track optimization improvements over time.

Changes in this PR:
- Added new `bench` target to Makefile for running Go benchmarks.
- Created `benchmark_test.go` with sub-benchmarks for each sample file (Church, Fast, Saccharine, Simple, Thunk).
- Used `b.Run` for organizing sample-specific sub-benchmarks and `b.Loop` for efficient iteration.
- Configured benchmarks to use fixed iterations (10x) and 4 CPU cores for reproducible results.

### Decisions

Used `b.Loop()` instead of the traditional `for i := 0; i < b.N; i++` pattern.
This is the modern Go benchmarking idiom that provides better performance measurement.

Benchmarks run the full pipeline (parse, compile, execute, stringify) to measure end-to-end performance for each sample.
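A schematic, self-contained example of the `b.Run`/`b.Loop` structure described above (note that `b.Loop` requires Go 1.24 or newer); `reduceSample` here is only a stand-in for the real parse/compile/execute/stringify pipeline.

```go
package main

import (
	"strings"
	"testing"
)

// reduceSample is a stand-in for the real end-to-end pipeline; it exists
// only to keep this sketch self-contained.
func reduceSample(src string) string {
	return strings.TrimSpace(src)
}

// BenchmarkSamples mirrors the b.Run / b.Loop layout described above.
// Invoke with: go test -bench=. -benchtime=10x -cpu=4 ./...
func BenchmarkSamples(b *testing.B) {
	samples := map[string]string{
		"Simple": `(\x.x) y`,
		"Thunk":  `(\x.\y.x) a b`,
	}
	for name, src := range samples {
		b.Run(name, func(b *testing.B) {
			// b.Loop replaces the classic `for i := 0; i < b.N; i++` idiom.
			for b.Loop() {
				_ = reduceSample(src)
			}
		})
	}
}
```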

## Benefits

Provides quantitative performance metrics for the lambda interpreter.
Enables tracking performance improvements or regressions across different sample complexities.
Consistent benchmark configuration (fixed iterations, CPU cores) ensures reproducible results for comparison.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #14
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-11 22:48:26 +00:00
0eff85f8fa feat: add output flag (#13)
## Description

The lambda CLI previously only wrote output to stdout using shell redirection.
This PR adds support for writing results to files using the `-o` flag.
This is implemented using a new `Destination` interface that mirrors the existing `Source` pattern.

Changes:
- Added `Destination` interface with `StdoutDestination` and `FileDestination` implementations.
- Added `-o` flag to CLI argument parser for output file specification.
- Updated `Config` to use `Destination` instead of direct output handling.
- Refactored main to use `Destination.Write()` for result output.
- Updated Makefile targets (`run`, `profile`, `explain`) to use `-o` flag instead of shell redirection.

### Decisions

The `-o` flag defaults to stdout when not specified or when set to `-`.
This maintains backward compatibility while providing explicit file output capability.
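A hedged sketch of the pattern; the `Write` signature matches the interface shown in the diff below, but the concrete types here are illustrative rather than the exact implementations.

```go
package config

import (
	"fmt"
	"os"
)

// Destination mirrors the Source pattern for output.
type Destination interface {
	Write(data string) error
}

// StdoutDestination writes the result to standard output (the default, and
// what `-o -` selects).
type StdoutDestination struct{}

func (StdoutDestination) Write(data string) error {
	_, err := fmt.Fprintln(os.Stdout, data)
	return err
}

// FileDestination writes the result to the file at Path.
type FileDestination struct{ Path string }

func (d FileDestination) Write(data string) error {
	return os.WriteFile(d.Path, []byte(data), 0o644)
}
```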

## Benefits

- Cleaner command-line interface without shell redirection.
- Symmetric design with `Source` interface for input.
- More portable across different shells and environments.
- Explicit output handling improves code clarity.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [ ] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Reviewed-on: #13
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-11 22:14:48 +00:00
62699a0e37 feat: add file input flag (#12)
## Description

The lambda CLI previously only supported inline string expressions and stdin input.
This PR adds support for reading lambda expressions from files using the `-f` flag.
This makes it easier to work with larger programs stored in files.

Changes:
- Added `FileSource` type to `internal/config/source.go` for reading from file paths.
- Added `-f` flag to CLI argument parser with validation to prevent conflicting inputs.
- Updated Makefile targets (`run`, `profile`, `explain`) to use `-f` flag instead of stdin redirection.

### Decisions

The `-f` flag takes precedence over positional arguments.
If both are specified, an error is returned to avoid ambiguity.
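A brief, illustrative sketch of the idea; the real `Source` interface's method set is not shown in this log, so the `Read` signature here is an assumption.

```go
package config

import "os"

// Source is assumed here to expose the program text as a string.
type Source interface {
	Read() (string, error)
}

// FileSource reads the lambda program from the file named by the -f flag.
type FileSource struct{ Path string }

func (s FileSource) Read() (string, error) {
	data, err := os.ReadFile(s.Path)
	return string(data), err
}
```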

## Benefits

- More intuitive workflow for file-based lambda programs.
- Cleaner Makefile targets without stdin redirection.
- Consistent with common CLI conventions (e.g., `grep -f`, `awk -f`).

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [ ] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Reviewed-on: #12
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-11 21:43:11 +00:00
90c205db2e refactor: improve Makefile structure and usability (#11)
## Description

This PR refactors the Makefile to improve usability, maintainability, and cross-platform compatibility.
The changes modernize the Makefile structure and make it more user-friendly.

Changes made:
- Renamed the `it` target to `build` for better clarity and conventional naming.
- Added a `help` target as the default goal to display available targets and their descriptions.
- Moved the TEST variable to the top with other configuration variables for better organization.
- Updated binary name from `lambda.exe` to `lambda` and used `${BINARY_NAME}` variable consistently throughout.
- Replaced all `@` prefixes with the `.SILENT:` directive for cleaner syntax.
- Added a `clean` target to remove all build artifacts (binary, program.out, profile directory).
- Made the `graph` target cross-platform by replacing the macOS-specific `open` command with a `file://` URL echo.
- Updated .gitignore to include the `lambda` binary.

### Decisions

- Used `.SILENT:` directive instead of individual `@` prefixes for a cleaner, more maintainable Makefile.
- Made `help` the default target so users can run `make` without arguments to see available commands.
- Removed platform-specific commands (like `open`) in favor of cross-platform alternatives.

## Benefits

- Improved discoverability: Users can run `make` to see all available targets.
- Better maintainability: Using `${BINARY_NAME}` variable consistently makes future changes easier.
- Cross-platform compatibility: Removed macOS-specific commands.
- Cleaner syntax: `.SILENT:` directive eliminates repetitive `@` prefixes.
- More conventional: renaming `it` to `build` follows standard Makefile conventions.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #11
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-11 21:13:41 +00:00
72a0afbbc0 perf: implement structural sharing for expression trees (#10)
## Description

The profiler revealed that 75% of CPU time was spent on memory allocation, with the primary bottleneck being expression copying during variable substitution. Every time a variable was substituted with an expression, `replacement.Copy()` would create a full deep copy of the entire expression tree.

This PR refactors the lambda calculus interpreter from a mutable, pointer-based implementation to an immutable, structurally-shared implementation. Expressions are now immutable value types that share unchanged subtrees instead of copying them.

**Key changes:**
- Made expression fields unexported to enforce immutability.
- Converted `Substitute()` and `Rename()` from in-place mutation to functional methods that return new expressions.
- Implemented structural sharing: methods return the same pointer when nothing changes.
- Removed `Copy()` method entirely - no more deep copying during substitution.
- Added getter methods for accessing expression fields from outside the package.

### Decisions

**Immutability over mutation:** Switched from mutable `*Expression` pointers with in-place updates to immutable expressions that return new trees. This is a fundamental architectural shift but aligns with functional programming principles and enables structural sharing.

**Structural sharing strategy:** When `Substitute()` or `Rename()` encounters an unchanged subtree, it returns the original pointer instead of creating a new object. This is safe because expressions are now immutable.

**Field encapsulation:** Made all expression fields unexported (`Parameter` → `parameter`, `Body` → `body`, etc.) to prevent external mutation. Added getter methods for controlled access.

## Benefits

**Performance improvements** (measured across all samples):

| Sample      | Before CPU | After CPU | Improvement | Copy Overhead Eliminated |
|-------------|-----------|----------|-------------|--------------------------|
| **saccharine** | 320ms | 160ms | **50% faster** | 50ms (15.6% of total) |
| **church** | 230ms | 170ms | **26% faster** | 40ms (17.4% of total) |
| **simple** | 30ms | 20ms | **33% faster** | 10ms (33.3% of total) |

**Wall-clock improvements:**
- saccharine: 503ms → 303ms (40% faster)
- church: 404ms → 302ms (25% faster)

**Memory allocation eliminated:**
- Before: `runtime.mallocgcSmallScanNoHeader` consumed 10-50ms per sample
- After: **Completely eliminated from profile** 
- All `Copy()` method calls removed from hot path

**The optimization in action:**

Before:
```go
func Substitute(e *Expression, target string, replacement Expression) {
    switch typed := (*e).(type) {
    case *Variable:
        if typed.Value == target {
            *e = replacement.Copy()  // Deep copy entire tree!
        }
    }
}
```

After:
```go
func (v *Variable) Substitute(target string, replacement Expression) Expression {
    if v.value == target {
        return replacement  // Share pointer directly, no allocation
    }
    return v  // Unchanged, share self
}
```
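The same sharing rule extends to composite nodes; a sketch (field names illustrative) of how an application returns itself when neither child changed:

```go
func (a *Application) Substitute(target string, replacement Expression) Expression {
	fn := a.function.Substitute(target, replacement)
	arg := a.argument.Substitute(target, replacement)
	if fn == a.function && arg == a.argument {
		return a // both children unchanged: share this node instead of allocating
	}
	return &Application{function: fn, argument: arg}
}
```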

**Codebase improvements:**
- More idiomatic functional programming style.
- Immutability prevents entire class of mutation bugs.
- Clearer ownership semantics (expressions are values, not mutable objects).
- Easier to reason about correctness (no action at a distance).

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`perf/structural-sharing`).
- [x] Tests pass (no test files exist, but build succeeds and profiling confirms correctness).
- [x] Documentation updated (added comments explaining structural sharing).

Reviewed-on: #10
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-11 02:15:38 +00:00
22019acbb1 feat: add PHONY declarations and docs target to Makefile (#9)
---
name: "Default Template"
about: "The default template for `lambda`."
title: "feat: add PHONY declarations and docs target to Makefile"
ref: "main"
assignees: []
labels: []
---

## Description

This PR improves the Makefile by adding proper `.PHONY` declarations and a new `docs` target.
The `.PHONY` declarations ensure make properly handles targets that don't create files.
The `docs` target provides an easy way to view Go package documentation locally using godoc.

Changes:
- Add `.PHONY` declaration for all non-file targets (`it`, `run`, `profile`, `explain`, `graph`, `docs`).
- Add `docs` target that runs godoc server on port 6060 with helpful usage message.

### Decisions

None.

## Benefits

- Prevents make from getting confused if files with target names exist.
- Provides convenient documentation viewing with `make docs`.
- Improves Makefile maintainability following best practices.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [ ] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #9
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-10 23:29:30 +00:00
d831312dc3 feat: complete PR template frontmatter fields (#8)
## Description

Adds all recommended frontmatter fields to the Gitea pull request template.
This includes `about`, `ref`, `assignees`, and `labels` fields in addition to the required `name` and `title` fields.
These fields provide better template documentation and default PR settings.

## Benefits

- The PR template now includes a description (`about`) explaining its purpose.
- Default target branch is explicitly set to `main` via the `ref` field.
- Template follows Gitea best practices with all available frontmatter fields.
- Provides a complete, well-documented template structure for contributors.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #8
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-10 23:00:32 +00:00
c0353c8e1f feat: add required name field to PR template (#7)
## Description

Gitea requires a `name` field in the frontmatter of pull request templates.
This PR adds the required `name: "Default Template"` field to fix the template validation error.
Additionally updates CLAUDE.md to reference the PR template format and use lowercase for the PR title placeholder.

## Benefits

- The pull request template is now valid and will be properly loaded by Gitea.
- Users creating PRs will see the template automatically populated.
- CLAUDE.md now documents the PR template format for consistency.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #7
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-10 22:54:43 +00:00
e63c0df410 fix: move PR template to correct Gitea location (#6)
## Description

Gitea requires the pull request template to be at `.gitea/PULL_REQUEST_TEMPLATE.md` (as a file), not in a `.gitea/PULL_REQUEST_TEMPLATE/` directory.
This PR moves the template from `.gitea/PULL_REQUEST_TEMPLATE/default.md` to `.gitea/PULL_REQUEST_TEMPLATE.md` to ensure it works correctly with Gitea.

## Benefits

- The pull request template will now be automatically loaded when creating new PRs in Gitea.
- Follows Gitea's documented convention for PR templates.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #6
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-10 22:49:24 +00:00
8b6c632e4d feat: add pull request template (#5)
## Description

Added a standardized pull request template to improve PR quality and consistency. The template provides a clear structure for contributors to follow when creating PRs.

The template includes:
- Title placeholder following conventional commit format (`<type>: <description>`)
- Description section with guidance on context, rationale, and changes
- Optional Decisions section for architectural choices
- Benefits section to highlight improvements
- Checklist for conventional commits, branch naming, tests, and documentation

### Decisions

Created the template in `.gitea/PULL_REQUEST_TEMPLATE/default.md` to support multiple template types in the future.

## Benefits

- Ensures PRs follow project conventions (conventional commits, branch naming)
- Provides clear structure for describing changes
- Helps reviewers understand context and benefits
- Reduces back-and-forth by reminding contributors of requirements
- Supports future expansion with additional template types

Reviewed-on: #5
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-10 22:44:41 +00:00
7a5944594f docs: add project guidelines for Claude Code (#4)
## Summary
- Add CLAUDE.md with project guidelines
- Documents conventional commit format and types
- Specifies branch naming conventions
- Includes PR management instructions using tea CLI

## Test plan
- Review CLAUDE.md content
- Verify formatting and accuracy

Reviewed-on: #4
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-10 22:32:15 +00:00
66 changed files with 1773 additions and 474 deletions

View File

@@ -0,0 +1,58 @@
---
name: "Bug Report"
about: "Report a bug or unexpected behavior in the lambda interpreter."
title: "fix: "
ref: "main"
assignees: []
labels:
- bug
---
## Context
<!--
Describe what you were trying to do when you encountered the bug.
Explain what you expected to happen.
-->
## Current Behavior
<!--
Describe what actually happened.
Be specific about the incorrect behavior or error.
-->
## Steps to Reproduce
<!--
Provide step-by-step instructions to reproduce the issue.
Include any relevant code, commands, or input.
-->
1.
2.
3.
## Expected Behavior
<!--
Describe what should happen instead.
-->
## Environment
<!--
Provide relevant information about your environment.
-->
- Lambda version:
- Go version:
- Operating system:
## Additional Context
<!--
Add any other context about the problem.
Include error messages, logs, or screenshots if applicable.
If none exist, omit this section.
-->

View File

@@ -0,0 +1,44 @@
---
name: "Feature Request"
about: "Suggest a new feature or enhancement for the lambda interpreter."
title: "feat: "
ref: "main"
assignees: []
labels:
- enhancement
---
## Context
<!--
Describe the problem or limitation you're encountering.
Explain why this feature would be valuable.
-->
## Proposed Solution
<!--
Describe your proposed solution or enhancement.
Be specific about what you want to see implemented.
-->
## Alternatives Considered
<!--
List any alternative solutions or approaches you've considered.
If none exist, omit this section.
-->
## Acceptance Criteria
<!--
List clear, testable criteria that define when this feature is complete.
Use bullet points starting with •
-->
## Additional Context
<!--
Add any other context, screenshots, or examples about the feature request.
If none exist, omit this section.
-->

View File

@@ -0,0 +1,37 @@
---
name: "General Issue"
about: "Create an issue that doesn't fit other templates."
title: ""
ref: "main"
assignees: []
labels: []
---
## Context
<!--
Describe the background and context for this issue.
Explain why this issue exists.
-->
## Description
<!--
Provide a detailed description of what needs to be done.
Be clear and specific about the requirements.
-->
## Acceptance Criteria
<!--
List clear, testable criteria that define when this issue is complete.
Use bullet points starting with •
If none exist, omit this section.
-->
## Additional Context
<!--
Add any other relevant information, links, or references.
If none exist, omit this section.
-->

View File

@@ -0,0 +1,37 @@
---
name: "Default Template"
about: "The default template for `lambda`."
title: "<type>: <description>"
ref: "main"
assignees: []
labels: []
---
## Description
<!--
First, describe the context for the PR.
Then, explain why the PR exists.
Finally, in concise, sentence-long bullets, explain each change.
-->
### Decisions
<!--
List any major architectural decisions here.
If none exist, omit this section.
-->
## Benefits
<!--
List any major benefits here.
How would this PR improve the codebase/product?
-->
## Checklist
- [ ] Code follows conventional commit format.
- [ ] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [ ] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

7
.gitignore vendored
View File

@@ -3,18 +3,13 @@
# https://github.com/github/gitignore/blob/main/community/Golang/Go.AllowList.gitignore
#
# Binaries for programs and plugins
/lambda
*.exe
*.exe~
*.dll
*.so
*.dylib
# Build artifacts
lambda
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out

View File

@@ -1,83 +1,230 @@
---
# golangci-lint configuration file made by @ccoVeille
# Source: https://github.com/ccoVeille/golangci-lint-config-examples/
# Author: @ccoVeille
# License: MIT
# Variant: 03-safe
# Version: v2.0.0
#
version: "2"
formatters:
enable:
# format the code
- gofmt
# format the block of imports
- gci
settings:
# format the code with Go standard library
gofmt:
# simplify the code
# https://pkg.go.dev/cmd/gofmt#hdr-The_simplify_command
simplify: true
rewrite-rules:
# replace `interface{}` with `any` in the code on format
- pattern: 'interface{}'
replacement: 'any'
gci:
# make sure imports are always in a deterministic order
# https://github.com/daixiang0/gci/
gci: # define the section orders for imports
sections:
# Standard section: captures all standard packages.
- standard
# Default section: catchall that is not standard or custom
- default
# linters that related to local tool, so they should be separated
- localmodule
linters:
exclusions:
# these presets where present in the v1 version of golangci-lint
# it's interesting to keep them when migrating, but removing them should be the goal
presets:
# exclude check on comments format in godoc
# These are common false positives in poor code
# you should not use this on recent code you write from scratch
# More information: https://golangci-lint.run/usage/false-positives/#comments
#
# Please uncomment the following line if your code is not using the godoc format
- comments
# Common false positives
# feel free to remove this if you don't have any false positives
# More information: https://golangci-lint.run/usage/false-positives/#common-false-positives
- common-false-positives
# Legacy preset is not recommended anymore
# More information: https://golangci-lint.run/usage/false-positives/#legacy
- legacy
# std-error-handling is a set of rules that avoid reporting unhandled errors on common functions/methods
# More information: https://golangci-lint.run/usage/false-positives/#std-error-handling
- std-error-handling
# some linters are enabled by default
# https://golangci-lint.run/usage/linters/
#
# enable some extra linters
enable:
# Errcheck is a program for checking for unchecked errors in Go code.
- errcheck
# Vet examines Go source code and reports suspicious constructs.
- govet
# Detects when assignments to existing variables are not used.
- ineffassign
# It's a set of rules from staticcheck. See https://staticcheck.io/
- staticcheck
# Checks Go code for unused constants, variables, functions and types.
- unused
# Fast, configurable, extensible, flexible, and beautiful linter for Go.
# Drop-in replacement of golint.
- revive
# make sure to use t.Helper() when needed
- thelper
# mirror suggests rewrites to avoid unnecessary []byte/string conversion
- mirror
# detect the possibility to use variables/constants from the Go standard library.
- usestdlibvars
# Finds commonly misspelled English words.
- misspell
# Checks for duplicate words in the source code.
- dupword
# linter to detect errors invalid key values count
- loggercheck
# detect when a package or method could be replaced by one from the standard library
- exptostd
# detects nested contexts in loops or function literals
- fatcontext
# Reports uses of functions with replacement inside the testing package.
- usetesting
settings:
revive:
rules:
# these are the default revive rules
# you can remove the whole "rules" node if you want
# BUT
# ! /!\ they all need to be present when you want to add more rules than the default ones
# otherwise, you won't have the default rules, but only the ones you define in the "rules" node
# Blank import should be only in a main or test package, or have a comment justifying it.
- name: blank-imports
# context.Context() should be the first parameter of a function when provided as argument.
- name: context-as-argument
arguments:
- allowTypesBefore: "*testing.T"
# Basic types should not be used as a key in `context.WithValue`
- name: context-keys-type
# Importing with `.` makes the programs much harder to understand
- name: dot-imports
# Empty blocks make code less readable and could be a symptom of a bug or unfinished refactoring.
- name: empty-block
# for better readability, variables of type `error` must be named with the prefix `err`.
- name: error-naming
# for better readability, the errors should be last in the list of returned values by a function.
- name: error-return
# for better readability, error messages should not be capitalized or end with punctuation or a newline.
- name: error-strings
# report when replacing `errors.New(fmt.Sprintf())` with `fmt.Errorf()` is possible
- name: errorf
# check naming and commenting conventions on exported symbols.
- name: exported
arguments:
# make error messages clearer
- "sayRepetitiveInsteadOfStutters"
# incrementing an integer variable by 1 is recommended to be done using the `++` operator
- name: increment-decrement
# highlights redundant else-blocks that can be eliminated from the code
# - name: indent-error-flow
# This rule suggests a shorter way of writing ranges that do not use the second value.
- name: range
# receiver names in a method should reflect the struct name (p for Person, for example)
- name: receiver-naming
# redefining built in names (true, false, append, make) can lead to bugs very difficult to detect.
- name: redefines-builtin-id
# redundant else-blocks that can be eliminated from the code.
# - name: superfluous-else
# prevent confusing name for variables when using `time` package
- name: time-naming
# warns when an exported function or method returns a value of an un-exported type.
- name: unexported-return
# spots and proposes to remove unreachable code. also helps to spot errors
- name: unreachable-code
# Functions or methods with unused parameters can be a symptom of an unfinished refactoring or a bug.
- name: unused-parameter
# report when a variable declaration can be simplified
- name: var-declaration
# warns when initialism, variable or package naming conventions are not followed.
- name: var-naming
misspell:
# Correct spellings using locale preferences for US or UK.
# Setting locale to US will correct the British spelling of 'colour' to 'color'.
# Default ("") is to use a neutral variety of English.
locale: US
# List of words to ignore
# among the one defined in https://github.com/golangci/misspell/blob/master/words.go
ignore-rules: []
# - valor
# - and
# Extra word corrections.
extra-words: []
# - typo: "whattever"
# correction: "whatever"
output:
# Order to use when sorting results.
# Possible values: `file`, `linter`, and `severity`.
#
# If the severity values are inside the following list, they are ordered in this order:
# 1. error
# 2. warning
# 3. high
# 4. medium
# 5. low
# Either they are sorted alphabetically.
#
# Default: ["file"]
sort-order:
- linter
- severity
- file
- file # filepath, line, and column.

View File

@@ -1,3 +0,0 @@
{
"makefile.configureOnOpen": false
}

View File

@@ -1,6 +1,12 @@
# Guide To `lambda`
## Coding Styles
## Documentation Style
Use full sentences.
Every sentence gets its own line in Markdown.
Every sentence ends in a period.
## Coding Style
### Conventional Commits
@@ -14,11 +20,13 @@ Use conventional commit format: `<type>: <description>`.
- `fix: correct variable renaming in nested abstractions`
- `docs: update Makefile documentation`
DO NOT advertise Claude.
### Branch Names
Use format: `<type>/<description>` with kebab-case.
**Types**: Same as commits: `feat`, `fix`, `docs`, `refactor`, `test`, `chore`, `perf`
**Types**: Same as commits: `feat`, `fix`, `docs`, `refactor`, `test`, `chore`, `perf`.
**Examples**:
@@ -27,16 +35,71 @@ Use format: `<type>/<description>` with kebab-case.
- `docs/makefile-improvements`
- `refactor/silent-directive`
DO NOT advertise Claude.
## Issue Management
Use the `tea` CLI (Gitea command-line tool) for issue operations.
**Common commands**:
- `tea issues list` - List all issues.
- `tea issues <number>` - View details of a specific issue.
- `tea issues create --title "<title>" --body "<description>"` - Create a new issue.
- `tea issues close <number>` - Close an issue.
**Reading issues**: Use `tea issues <number>` to read the full details of an issue before starting work.
## Issue Workflow
When working on an issue:
1. Read the issue using `tea issues <number>` to understand requirements.
2. Create a feature branch following the branch naming convention.
3. Make commits following the conventional commit format.
4. Submit a pull request when ready.
**Important**: Never commit directly to `main`.
All work must be done in a feature branch and submitted via pull request.
## Pull Request Management
Use the `tea` CLI (Gitea command-line tool) for PR operations instead of `gh`.
**Common commands**:
- `tea pr create` - Create a new pull request
- `tea pr list` - List pull requests
- `tea pr checkout <number>` - Check out a PR locally
- `tea pr close <number>` - Close a pull request
- `tea pr merge <number>` - Merge a pull request
- `tea pr create --title "<title>" --description "<body>"` - Create a new pull request.
- `tea pr list` - List pull requests.
- `tea pr checkout <number>` - Check out a PR locally.
- `tea pr close <number>` - Close a pull request.
- `tea pr merge <number>` - Merge a pull request.
**Creating PRs**: Always create PRs to the `main` branch unless specified otherwise
**Note**: Use `--description` (not `--body`) for PR body content.
**Creating PRs**: Always create PRs in a branch other than `main`, to the `main` branch unless specified otherwise. ALWAYS FOLLOW THE PR TEMPLATE, EXACTLY.
**Linking issues**: When a PR solves an issue, reference the issue in both the commit message and PR description using `Closes #<number>`.
This automatically links and closes the issue when the PR is merged.
### Updating PRs
When pushing additional changes to an existing PR, add a comment summarizing the new commits.
This keeps reviewers informed of what changed since the initial PR description.
Use the `tea` CLI to add comments to pull requests:
```bash
tea comment <number> "Comment text"
```
#### Examples
```bash
# Add a comment to PR #42
tea comment 42 "Updated implementation based on feedback"
# Add a multi-line comment
tea comment 42 "Summary of changes:
- Fixed bug in reducer
- Added new tests"
```

View File

@@ -1,40 +1,51 @@
BINARY_NAME=lambda
TEST=simple
.PHONY: help build run profile explain graph docs test bench clean
.DEFAULT_GOAL := help
.SILENT:
.PHONY: help it run profile explain graph clean
help:
echo "Available targets:"
echo " it - Build the lambda binary"
echo " run - Build and run with sample input (default: simple.txt)"
echo " build - Build the lambda executable"
echo " run - Build and run the lambda interpreter (use TEST=<name> to specify sample)"
echo " profile - Build and run with CPU profiling enabled"
echo " explain - Run with explanation mode and profiling"
echo " graph - Generate CPU profile visualization"
echo " clean - Remove build artifacts"
echo ""
echo "Usage: make run TEST=<sample-name>"
echo " explain - Build and run with explanation mode and profiling"
echo " graph - Generate and open CPU profile visualization"
echo " docs - Start local godoc server on port 6060"
echo " test - Run tests for all samples"
echo " bench - Run benchmarks for all samples"
echo " clean - Remove all build artifacts"
it:
build:
go build -o ${BINARY_NAME} ./cmd/lambda
chmod +x ${BINARY_NAME}
run: it
./lambda - < ./samples/$(TEST).txt > program.out
run: build
./${BINARY_NAME} -s -f ./tests/$(TEST).test -o program.out
profile: it
mkdir -p profile
./lambda -p profile/cpu.prof - < ./samples/$(TEST).txt > program.out
profile: build
./${BINARY_NAME} -p profile/cpu.prof -f ./tests/$(TEST).test -o program.out
explain: it
mkdir -p profile
./lambda -x -p profile/cpu.prof - < ./samples/$(TEST).txt > program.out
explain: build
./${BINARY_NAME} -x -p profile/cpu.prof -f ./tests/$(TEST).test -o program.out > explain.out
graph: profile
graph:
go tool pprof -raw -output=profile/cpu.raw profile/cpu.prof
go tool pprof -svg profile/cpu.prof > profile/cpu.svg
echo "Profile graph saved to 'file://profile/cpu.svg'"
echo ">>> View at 'file://$(PWD)/profile/cpu.svg'"
docs:
echo ">>> View at 'http://localhost:6060/pkg/git.maximhutz.com/max/lambda/'"
go run golang.org/x/tools/cmd/godoc@latest -http=:6060
test:
go test -v ./cmd/lambda
bench:
go test -bench=. -benchtime=10x -cpu=4 ./cmd/lambda
clean:
rm -f ${BINARY_NAME} program.out
rm -f ${BINARY_NAME}
rm -f program.out
rm -rf profile/

View File

@@ -1,17 +1,15 @@
package main
import (
"fmt"
"os"
"git.maximhutz.com/max/lambda/internal/cli"
"git.maximhutz.com/max/lambda/internal/config"
"git.maximhutz.com/max/lambda/internal/engine"
"git.maximhutz.com/max/lambda/internal/explanation"
"git.maximhutz.com/max/lambda/internal/performance"
"git.maximhutz.com/max/lambda/internal/statistics"
"git.maximhutz.com/max/lambda/internal/plugins"
"git.maximhutz.com/max/lambda/pkg/convert"
"git.maximhutz.com/max/lambda/pkg/debruijn"
"git.maximhutz.com/max/lambda/pkg/lambda"
"git.maximhutz.com/max/lambda/pkg/reducer"
"git.maximhutz.com/max/lambda/pkg/saccharine"
)
@@ -35,42 +33,53 @@ func main() {
// Compile expression to lambda calculus.
compiled := convert.SaccharineToLambda(ast)
logger.Info("compiled λ expression", "tree", lambda.Stringify(compiled))
logger.Info("compiled λ expression", "tree", compiled.String())
// Create reduction engine.
process := engine.New(options, &compiled)
// Create reducer based on the selected interpreter.
var red reducer.Reducer
switch options.Interpreter {
case config.DeBruijnInterpreter:
dbExpr := convert.LambdaToDeBruijn(compiled)
logger.Info("converted to De Bruijn", "tree", dbExpr.String())
red = debruijn.NewNormalOrderReducer(&dbExpr)
default:
red = lambda.NewNormalOrderReducer(&compiled)
}
// If the user selected to track CPU performance, attach a profiler to the
// process.
// If the user selected to track CPU performance, attach a profiler.
if options.Profile != "" {
profiler := performance.Track(options.Profile)
process.On("start", profiler.Start)
process.On("end", profiler.End)
plugins.NewPerformance(options.Profile, red)
}
// If the user selected to produce a step-by-step explanation, attach an
// observer here.
// observer.
if options.Explanation {
explanation.Track(process)
plugins.NewExplanation(red)
}
// If the user opted to track statistics, attach a tracker here, too.
// If the user opted to track statistics, attach a tracker.
if options.Statistics {
statistics := statistics.Track()
process.On("start", statistics.Start)
process.On("step", statistics.Step)
process.On("end", statistics.End)
plugins.NewStatistics(red)
}
// If the user selected for verbose debug logs, attach a reduction tracker.
if options.Verbose {
process.On("step", func() {
logger.Info("reduction", "tree", lambda.Stringify(compiled))
})
plugins.NewLogs(logger, red)
}
process.Run()
// Run reduction.
red.Reduce()
// Return the final reduced result.
fmt.Println(lambda.Stringify(compiled))
// For De Bruijn, convert back to lambda for consistent output.
var result string
if options.Interpreter == config.DeBruijnInterpreter {
dbExpr := red.Expression().(debruijn.Expression)
lambdaExpr := convert.DeBruijnToLambda(dbExpr)
result = lambdaExpr.String()
} else {
result = red.Expression().String()
}
err = options.Destination.Write(result)
cli.HandleError(err)
}

163
cmd/lambda/lambda_test.go Normal file
View File

@@ -0,0 +1,163 @@
package main
import (
"os"
"path/filepath"
"strings"
"testing"
"git.maximhutz.com/max/lambda/pkg/convert"
"git.maximhutz.com/max/lambda/pkg/debruijn"
"git.maximhutz.com/max/lambda/pkg/lambda"
"git.maximhutz.com/max/lambda/pkg/saccharine"
"github.com/stretchr/testify/assert"
)
// Helper function to run a single sample through the lambda interpreter.
func runSample(samplePath string) (string, error) {
// Read the sample file.
input, err := os.ReadFile(samplePath)
if err != nil {
return "", err
}
// Parse code into syntax tree.
ast, err := saccharine.Parse(string(input))
if err != nil {
return "", err
}
// Compile expression to lambda calculus.
compiled := convert.SaccharineToLambda(ast)
// Create and run the reducer.
reducer := lambda.NewNormalOrderReducer(&compiled)
reducer.Reduce()
return reducer.Expression().String() + "\n", nil
}
// Helper function to run a single sample through the De Bruijn interpreter.
func runSampleDeBruijn(samplePath string) (string, error) {
// Read the sample file.
input, err := os.ReadFile(samplePath)
if err != nil {
return "", err
}
// Parse code into syntax tree.
ast, err := saccharine.Parse(string(input))
if err != nil {
return "", err
}
// Compile expression to lambda calculus.
compiled := convert.SaccharineToLambda(ast)
// Convert to De Bruijn and run reducer.
dbExpr := convert.LambdaToDeBruijn(compiled)
reducer := debruijn.NewNormalOrderReducer(&dbExpr)
reducer.Reduce()
// Convert back to lambda for output.
result := reducer.Expression().(debruijn.Expression)
lambdaResult := convert.DeBruijnToLambda(result)
return lambdaResult.String() + "\n", nil
}
// Test that all samples produce expected output with lambda interpreter.
func TestSamplesValidity(t *testing.T) {
// Discover all .test files in the tests directory.
testFiles, err := filepath.Glob("../../tests/*.test")
assert.NoError(t, err, "Failed to read tests directory.")
assert.NotEmpty(t, testFiles, "No '*.test' files found in directory.")
for _, testPath := range testFiles {
// Build expected file path.
expectedPath := strings.TrimSuffix(testPath, filepath.Ext(testPath)) + ".expected"
name := strings.TrimSuffix(filepath.Base(testPath), filepath.Ext(testPath))
t.Run(name, func(t *testing.T) {
// Run the sample and capture output.
actual, err := runSample(testPath)
assert.NoError(t, err, "Failed to run sample.")
// Read expected output.
expectedBytes, err := os.ReadFile(expectedPath)
assert.NoError(t, err, "Failed to read expected output.")
expected := string(expectedBytes)
// Compare outputs.
assert.Equal(t, expected, actual, "Output does not match expected.")
})
}
}
// Test that all samples produce expected output with De Bruijn interpreter.
func TestSamplesValidityDeBruijn(t *testing.T) {
// Discover all .test files in the tests directory.
testFiles, err := filepath.Glob("../../tests/*.test")
assert.NoError(t, err, "Failed to read tests directory.")
assert.NotEmpty(t, testFiles, "No '*.test' files found in directory.")
for _, testPath := range testFiles {
// Build expected file path.
expectedPath := strings.TrimSuffix(testPath, filepath.Ext(testPath)) + ".expected"
name := strings.TrimSuffix(filepath.Base(testPath), filepath.Ext(testPath))
t.Run(name, func(t *testing.T) {
// Run the sample and capture output.
actual, err := runSampleDeBruijn(testPath)
assert.NoError(t, err, "Failed to run sample.")
// Read expected output.
expectedBytes, err := os.ReadFile(expectedPath)
assert.NoError(t, err, "Failed to read expected output.")
expected := string(expectedBytes)
// Compare outputs.
assert.Equal(t, expected, actual, "Output does not match expected.")
})
}
}
// Benchmark all samples using sub-benchmarks.
func BenchmarkSamples(b *testing.B) {
// Discover all .test files in the tests directory.
testFiles, err := filepath.Glob("../../tests/*.test")
assert.NoError(b, err, "Failed to read tests directory.")
assert.NotEmpty(b, testFiles, "No '*.test' files found in directory.")
for _, path := range testFiles {
name := strings.TrimSuffix(filepath.Base(path), filepath.Ext(path))
b.Run(name, func(b *testing.B) {
for b.Loop() {
_, err := runSample(path)
assert.NoError(b, err, "Failed to run sample.")
}
})
}
}
// Benchmark all samples using De Bruijn interpreter.
func BenchmarkSamplesDeBruijn(b *testing.B) {
// Discover all .test files in the tests directory.
testFiles, err := filepath.Glob("../../tests/*.test")
assert.NoError(b, err, "Failed to read tests directory.")
assert.NotEmpty(b, testFiles, "No '*.test' files found in directory.")
for _, path := range testFiles {
name := strings.TrimSuffix(filepath.Base(path), filepath.Ext(path))
b.Run(name, func(b *testing.B) {
for b.Loop() {
_, err := runSampleDeBruijn(path)
assert.NoError(b, err, "Failed to run sample.")
}
})
}
}

8
go.mod
View File

@@ -1,3 +1,11 @@
module git.maximhutz.com/max/lambda
go 1.25.5
require github.com/stretchr/testify v1.11.1
require (
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

9
go.sum Normal file
View File

@@ -0,0 +1,9 @@
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

View File

@@ -1,11 +1,21 @@
// Package "config" parses ad handles the user settings given to the program.
package config
// Interpreter specifies the reduction engine to use.
type Interpreter string
const (
LambdaInterpreter Interpreter = "lambda"
DeBruijnInterpreter Interpreter = "debruijn"
)
// Configuration settings for the program.
type Config struct {
Source Source // The source code given to the program.
Destination Destination // The destination for the final result.
Verbose bool // Whether or not to print debug logs.
Explanation bool // Whether or not to print an explanation of the reduction.
Profile string // If not empty, write a CPU profile to this file during execution.
Statistics bool // Whether or not to print statistics.
Interpreter Interpreter // The interpreter engine to use.
}

View File

@@ -0,0 +1,27 @@
package config
import (
"fmt"
"os"
)
// A method of writing output to the user.
type Destination interface {
// Write data to this destination.
Write(data string) error
}
// A destination writing to stdout.
type StdoutDestination struct{}
func (d StdoutDestination) Write(data string) error {
fmt.Println(data)
return nil
}
// A destination writing to a file.
type FileDestination struct{ Path string }
func (d FileDestination) Write(data string) error {
return os.WriteFile(d.Path, []byte(data+"\n"), 0644)
}

View File

@@ -12,28 +12,58 @@ func FromArgs() (*Config, error) {
explanation := flag.Bool("x", false, "Explanation. Whether or not to show all reduction steps.")
statistics := flag.Bool("s", false, "Statistics. If set, the process will print various statistics about the run.")
profile := flag.String("p", "", "CPU profiling. If an output file is defined, the program will profile its execution and dump its results into it.")
file := flag.String("f", "", "File. If set, read source from the specified file.")
output := flag.String("o", "", "Output. If set, write result to the specified file. Use '-' for stdout (default).")
interpreter := flag.String("i", "lambda", "Interpreter. The reduction engine to use: 'lambda' or 'debruijn'.")
flag.Parse()
// There must only be one input argument.
if flag.NArg() == 0 {
return nil, fmt.Errorf("no input given")
} else if flag.NArg() > 1 {
return nil, fmt.Errorf("more than 1 command-line argument")
// Validate interpreter flag.
var interpType Interpreter
switch *interpreter {
case "lambda":
interpType = LambdaInterpreter
case "debruijn":
interpType = DeBruijnInterpreter
default:
return nil, fmt.Errorf("invalid interpreter: %s (must be 'lambda' or 'debruijn')", *interpreter)
}
// Parse source type.
var source Source
if *file != "" {
// File flag takes precedence.
if flag.NArg() > 0 {
return nil, fmt.Errorf("cannot specify both -f flag and positional argument")
}
source = FileSource{Path: *file}
} else if flag.NArg() == 0 {
return nil, fmt.Errorf("no input given")
} else if flag.NArg() > 1 {
return nil, fmt.Errorf("more than 1 command-line argument")
} else {
// Positional argument.
if flag.Arg(0) == "-" {
source = StdinSource{}
} else {
source = StringSource{Data: flag.Arg(0)}
}
}
// Parse destination type.
var destination Destination
if *output == "" || *output == "-" {
destination = StdoutDestination{}
} else {
destination = FileDestination{Path: *output}
}
return &Config{
Source: source,
Destination: destination,
Verbose: *verbose,
Explanation: *explanation,
Profile: *profile,
Statistics: *statistics,
Interpreter: interpType,
}, nil
}

View File

@@ -27,3 +27,15 @@ func (s StdinSource) Extract() (string, error) {
return string(data), nil
}
// A source reading from a file.
type FileSource struct{ Path string }
func (s FileSource) Extract() (string, error) {
data, err := os.ReadFile(s.Path)
if err != nil {
return "", err
}
return string(data), nil
}

View File

@@ -1,32 +0,0 @@
// Package "engine" provides an extensible interface for users to interfact with
// λ-calculus.
package engine
import (
"git.maximhutz.com/max/lambda/internal/config"
"git.maximhutz.com/max/lambda/pkg/emitter"
"git.maximhutz.com/max/lambda/pkg/lambda"
)
// A process for reducing one λ-expression.
type Engine struct {
Config *config.Config
Expression *lambda.Expression
emitter.Emitter
}
// Create a new engine, given an unreduced λ-expression.
func New(config *config.Config, expression *lambda.Expression) *Engine {
return &Engine{Config: config, Expression: expression}
}
// Begin the reduction process.
func (e Engine) Run() {
e.Emit("start")
for lambda.ReduceOnce(e.Expression) {
e.Emit("step")
}
e.Emit("end")
}

View File

@@ -1,32 +0,0 @@
// Package "explanation" provides a observer to gather the reasoning during the
// reduction, and present a thorough explanation to the user for each step.
package explanation
import (
"fmt"
"git.maximhutz.com/max/lambda/internal/engine"
"git.maximhutz.com/max/lambda/pkg/lambda"
)
// Track the reductions made by a reduction process.
type Tracker struct {
process *engine.Engine
}
// Attaches a new explanation tracker to a process.
func Track(process *engine.Engine) *Tracker {
tracker := &Tracker{process: process}
process.On("start", tracker.Start)
process.On("step", tracker.Step)
return tracker
}
func (t *Tracker) Start() {
fmt.Println(lambda.Stringify(*t.process.Expression))
}
func (t *Tracker) Step() {
fmt.Println(" =", lambda.Stringify(*t.process.Expression))
}

23
internal/plugins/debug.go Normal file
View File

@@ -0,0 +1,23 @@
package plugins
import (
"log/slog"
"git.maximhutz.com/max/lambda/pkg/reducer"
)
type Logs struct {
logger *slog.Logger
reducer reducer.Reducer
}
func NewLogs(logger *slog.Logger, r reducer.Reducer) *Logs {
plugin := &Logs{logger, r}
r.On(reducer.StepEvent, plugin.Step)
return plugin
}
func (t *Logs) Step() {
t.logger.Info("reduction", "tree", t.reducer.Expression().String())
}

View File

@@ -0,0 +1,31 @@
// Package "explanation" provides an observer to gather the reasoning during the
// reduction, and present a thorough explanation to the user for each step.
package plugins
import (
"fmt"
"git.maximhutz.com/max/lambda/pkg/reducer"
)
// Track the reductions made by a reduction process.
type Explanation struct {
reducer reducer.Reducer
}
// Attaches a new explanation tracker to a reducer.
func NewExplanation(r reducer.Reducer) *Explanation {
plugin := &Explanation{reducer: r}
r.On(reducer.StartEvent, plugin.Start)
r.On(reducer.StepEvent, plugin.Step)
return plugin
}
func (t *Explanation) Start() {
fmt.Println(t.reducer.Expression().String())
}
func (t *Explanation) Step() {
fmt.Println(" =", t.reducer.Expression().String())
}

View File

@@ -1,28 +1,34 @@
// Package "performance" provides a tracker to observer CPU performance during
// execution.
package performance
package plugins
import (
"os"
"path/filepath"
"runtime/pprof"
"git.maximhutz.com/max/lambda/pkg/reducer"
)
// Observes a reduction process, and publishes a CPU performance profile on
// completion.
type Tracker struct {
type Performance struct {
File string
filePointer *os.File
Error error
}
// Create a performance tracker that outputs a profile to "file".
func Track(file string) *Tracker {
return &Tracker{File: file}
func NewPerformance(file string, process reducer.Reducer) *Performance {
plugin := &Performance{File: file}
process.On(reducer.StartEvent, plugin.Start)
process.On(reducer.StopEvent, plugin.Stop)
return plugin
}
// Begin profiling.
func (t *Tracker) Start() {
func (t *Performance) Start() {
var absPath string
absPath, t.Error = filepath.Abs(t.File)
@@ -47,7 +53,7 @@ func (t *Tracker) Start() {
}
// Stop profiling.
func (t *Tracker) End() {
func (t *Performance) Stop() {
pprof.StopCPUProfile()
t.filePointer.Close()
}

View File

@@ -0,0 +1,44 @@
package plugins
import (
"fmt"
"os"
"time"
"git.maximhutz.com/max/lambda/internal/statistics"
"git.maximhutz.com/max/lambda/pkg/reducer"
)
// An observer, to track reduction performance.
type Statistics struct {
start time.Time
steps uint64
}
// Create a new Statistics plugin to track reduction performance.
func NewStatistics(r reducer.Reducer) *Statistics {
plugin := &Statistics{}
r.On(reducer.StartEvent, plugin.Start)
r.On(reducer.StepEvent, plugin.Step)
r.On(reducer.StopEvent, plugin.Stop)
return plugin
}
func (t *Statistics) Start() {
t.start = time.Now()
t.steps = 0
}
func (t *Statistics) Step() {
t.steps++
}
func (t *Statistics) Stop() {
results := statistics.Results{
StepsTaken: t.steps,
TimeElapsed: uint64(time.Since(t.start).Milliseconds()),
}
fmt.Fprint(os.Stderr, results.String())
}

View File

@@ -1,36 +0,0 @@
package statistics
import (
"fmt"
"os"
"time"
)
// An observer, to track reduction performance.
type Tracker struct {
start time.Time
steps uint64
}
// Create a new reduction performance tracker.
func Track() *Tracker {
return &Tracker{}
}
func (t *Tracker) Start() {
t.start = time.Now()
t.steps = 0
}
func (t *Tracker) Step() {
t.steps++
}
func (t *Tracker) End() {
results := Results{
StepsTaken: t.steps,
TimeElapsed: uint64(time.Since(t.start).Milliseconds()),
}
fmt.Fprint(os.Stderr, results.String())
}

View File

@@ -1,114 +0,0 @@
# Makefile Improvements
This document lists the issues identified in the original Makefile and the improvements that were implemented.
## Issues Fixed
### 1. Hardcoded `.exe` extension on Unix
**Problem**: `BINARY_NAME=lambda.exe` used a Windows extension on macOS/Linux systems.
**Solution**: Changed to `BINARY_NAME=lambda` since Unix executables don't use extensions.
**Commit**: `0d06fac` - fix: remove Windows .exe extension from binary name
---
### 2. Redundant `chmod +x`
**Problem**: The `chmod +x` command was unnecessary since `go build` already sets the executable bit.
**Solution**: Removed the redundant `chmod +x ${BINARY_NAME}` line.
**Commit**: `e0b0b92` - refactor: remove redundant chmod +x command
---
### 3. Missing `.PHONY` declarations
**Problem**: Without `.PHONY`, if files named `run`, `graph`, etc. existed, Make would skip those targets.
**Solution**: Added `.PHONY: help it run profile explain graph clean` declaration.
**Commit**: `e5ceeb2` - feat: add .PHONY declarations for all targets
---
### 4. No `clean` target
**Problem**: No standard way to remove build artifacts.
**Solution**: Added `clean` target to remove binary, output files, and profile directory:
```makefile
clean:
rm -f ${BINARY_NAME} program.out
rm -rf profile/
```
**Commit**: `7927df4` - feat: add clean target to remove build artifacts
---
### 5. Missing `help` or default target
**Problem**: Running `make` with no arguments was unclear about available targets.
**Solution**: Added `help` target documenting all available commands and their usage.
**Commit**: `24fdc1c` - feat: add help target to document available commands
---
### 6. Profile directory not created
**Problem**: The `profile` and `explain` targets wrote to `profile/cpu.prof` but never created the directory, causing failures on first run.
**Solution**: Added `mkdir -p profile` to both `profile` and `explain` targets.
**Commit**: `bb48d07` - fix: ensure profile directory exists before writing
---
### 7. No dependency check on `graph`
**Problem**: The `graph` target assumed `profile/cpu.prof` exists but didn't depend on `profile`.
**Solution**: Changed `graph:` to `graph: profile` to ensure profiling runs first.
**Commit**: `3158c35` - fix: add profile dependency to graph target
---
### 8. Verbose command prefixes
**Problem**: Every command used `@` prefix individually to suppress output, cluttering the file.
**Solution**: Added `.SILENT:` directive at the top and removed all `@` prefixes. Also moved `TEST` variable to top with other variables.
**Commit**: `8f70bfb` - refactor: use .SILENT directive instead of @ prefixes
---
## Additional Suggestions (Not Yet Implemented)
### 9. Missing `lambda` binary in `.gitignore`
**Issue**: The `.gitignore` has `*.exe` but not the actual `lambda` binary name. Since we removed the `.exe` extension, the binary won't be ignored.
**Recommendation**: Add `lambda` to `.gitignore`:
```
# Build artifacts
lambda
```
---
### 10. No explicit default target
**Issue**: While `help` is currently the first target (and thus default), it's not explicitly declared.
**Recommendation**: Add `.DEFAULT_GOAL := help` at the top for clarity:
```makefile
BINARY_NAME=lambda
TEST=simple
.DEFAULT_GOAL := help
.SILENT:
.PHONY: help it run profile explain graph clean
```
---
## Summary
The Makefile has been significantly improved with better organization, proper dependency management, directory creation, helpful documentation, and cleaner syntax. The remaining suggestions are minor quality-of-life improvements that can be addressed as needed.

View File

@@ -0,0 +1,82 @@
package convert
import (
"fmt"
"git.maximhutz.com/max/lambda/pkg/debruijn"
"git.maximhutz.com/max/lambda/pkg/lambda"
"git.maximhutz.com/max/lambda/pkg/set"
)
// DeBruijnToLambda converts a De Bruijn indexed expression back to named lambda calculus.
func DeBruijnToLambda(expr debruijn.Expression) lambda.Expression {
return deBruijnToLambdaWithContext(expr, []string{})
}
func deBruijnToLambdaWithContext(expr debruijn.Expression, context []string) lambda.Expression {
switch e := expr.(type) {
case *debruijn.Variable:
index := e.Index()
if index < len(context) {
// Bound variable: look up name in context.
name := context[len(context)-1-index]
return lambda.NewVariable(name)
}
// Free variable: use the label if available.
if e.Label() != "" {
return lambda.NewVariable(e.Label())
}
// Generate a name for free variables without labels.
return lambda.NewVariable(fmt.Sprintf("free%d", index))
case *debruijn.Abstraction:
// Generate a fresh parameter name.
used := collectUsedNames(e.Body(), context)
paramName := generateFreshName(used)
newContext := append(context, paramName)
body := deBruijnToLambdaWithContext(e.Body(), newContext)
return lambda.NewAbstraction(paramName, body)
case *debruijn.Application:
abs := deBruijnToLambdaWithContext(e.Abstraction(), context)
arg := deBruijnToLambdaWithContext(e.Argument(), context)
return lambda.NewApplication(abs, arg)
default:
panic("unknown expression type")
}
}
// collectUsedNames gathers all variable labels used in an expression.
func collectUsedNames(expr debruijn.Expression, context []string) *set.Set[string] {
used := set.New[string]()
for _, name := range context {
used.Add(name)
}
collectUsedNamesHelper(expr, used)
return used
}
func collectUsedNamesHelper(expr debruijn.Expression, used *set.Set[string]) {
switch e := expr.(type) {
case *debruijn.Variable:
if e.Label() != "" {
used.Add(e.Label())
}
case *debruijn.Abstraction:
collectUsedNamesHelper(e.Body(), used)
case *debruijn.Application:
collectUsedNamesHelper(e.Abstraction(), used)
collectUsedNamesHelper(e.Argument(), used)
}
}
// generateFreshName creates a fresh variable name not in the used set.
func generateFreshName(used *set.Set[string]) string {
for i := 0; ; i++ {
name := fmt.Sprintf("_%d", i)
if !used.Has(name) {
return name
}
}
}

View File

@@ -0,0 +1,44 @@
package convert
import (
"git.maximhutz.com/max/lambda/pkg/debruijn"
"git.maximhutz.com/max/lambda/pkg/lambda"
)
// LambdaToDeBruijn converts a lambda calculus expression to De Bruijn indexed form.
// An internal context tracks the bound variable names introduced by outer abstractions.
func LambdaToDeBruijn(expr lambda.Expression) debruijn.Expression {
return lambdaToDeBruijnWithContext(expr, []string{})
}
func lambdaToDeBruijnWithContext(expr lambda.Expression, context []string) debruijn.Expression {
switch e := expr.(type) {
case *lambda.Variable:
name := e.Value()
// Search for the variable in the context (innermost to outermost).
for i := len(context) - 1; i >= 0; i-- {
if context[i] == name {
index := len(context) - 1 - i
return debruijn.NewVariable(index, name)
}
}
// Free variable: assign it the index len(context), one past any bound index
// at this depth, and keep the original name as a display label.
return debruijn.NewVariable(len(context), name)
case *lambda.Abstraction:
// Add the parameter to the context.
newContext := append(context, e.Parameter())
body := lambdaToDeBruijnWithContext(e.Body(), newContext)
return debruijn.NewAbstraction(body)
case *lambda.Application:
abs := lambdaToDeBruijnWithContext(e.Abstraction(), context)
arg := lambdaToDeBruijnWithContext(e.Argument(), context)
return debruijn.NewApplication(abs, arg)
default:
panic("unknown expression type")
}
}
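To make the index assignment concrete, here is a minimal sketch (not part of this change; it assumes the module import path `git.maximhutz.com/max/lambda` used throughout this diff) that converts a named term to De Bruijn form and back. Note that the round trip preserves structure but invents fresh parameter names.

```go
package main

import (
	"fmt"

	"git.maximhutz.com/max/lambda/pkg/convert"
	"git.maximhutz.com/max/lambda/pkg/lambda"
)

func main() {
	// \x.\y.(x y): x is bound one binder beyond the innermost (index 1),
	// y by the innermost binder (index 0).
	term := lambda.NewAbstraction("x",
		lambda.NewAbstraction("y",
			lambda.NewApplication(lambda.NewVariable("x"), lambda.NewVariable("y"))))

	db := convert.LambdaToDeBruijn(term)
	fmt.Println(db.String()) // expected: \.\.(1 0)

	// Converting back generates fresh parameter names (e.g. _0, _1).
	back := convert.DeBruijnToLambda(db)
	fmt.Println(back.String()) // expected: \_0.\_1.(_0 _1)
}
```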

119
pkg/debruijn/expression.go Normal file
View File

@@ -0,0 +1,119 @@
// Package debruijn provides De Bruijn indexed lambda calculus expressions.
// De Bruijn indices eliminate the need for variable names by using numeric
// indices to refer to bound variables, avoiding capture issues during substitution.
package debruijn
import "git.maximhutz.com/max/lambda/pkg/expr"
// Expression is the interface for all De Bruijn indexed expression types.
// It embeds the general expr.Expression interface for cross-mode compatibility.
type Expression interface {
expr.Expression
Accept(Visitor)
}
/** ------------------------------------------------------------------------- */
// Abstraction represents a lambda abstraction without a named parameter.
// In De Bruijn notation, the parameter is implicit and referenced by index 0
// within the body.
type Abstraction struct {
body Expression
}
// Body returns the body of the abstraction.
func (a *Abstraction) Body() Expression {
return a.body
}
// Accept implements the Visitor pattern.
func (a *Abstraction) Accept(v Visitor) {
v.VisitAbstraction(a)
}
// String returns the De Bruijn notation string representation.
func (a *Abstraction) String() string {
return Stringify(a)
}
// NewAbstraction creates a new De Bruijn abstraction with the given body.
func NewAbstraction(body Expression) *Abstraction {
return &Abstraction{body: body}
}
/** ------------------------------------------------------------------------- */
// Application represents the application of one expression to another.
type Application struct {
abstraction Expression
argument Expression
}
// Abstraction returns the function expression being applied.
func (a *Application) Abstraction() Expression {
return a.abstraction
}
// Argument returns the argument expression.
func (a *Application) Argument() Expression {
return a.argument
}
// Accept implements the Visitor pattern.
func (a *Application) Accept(v Visitor) {
v.VisitApplication(a)
}
// String returns the De Bruijn notation string representation.
func (a *Application) String() string {
return Stringify(a)
}
// NewApplication creates a new application expression.
func NewApplication(abstraction Expression, argument Expression) *Application {
return &Application{abstraction: abstraction, argument: argument}
}
/** ------------------------------------------------------------------------- */
// Variable represents a De Bruijn indexed variable.
// The index indicates how many binders to skip to find the binding abstraction.
// The label is an optional hint for display purposes.
type Variable struct {
index int
label string
}
// Index returns the De Bruijn index.
func (v *Variable) Index() int {
return v.index
}
// Label returns the optional variable label.
func (v *Variable) Label() string {
return v.label
}
// Accept implements the Visitor pattern.
func (v *Variable) Accept(visitor Visitor) {
visitor.VisitVariable(v)
}
// String returns the De Bruijn notation string representation.
func (v *Variable) String() string {
return Stringify(v)
}
// NewVariable creates a new De Bruijn variable with the given index and label.
func NewVariable(index int, label string) *Variable {
return &Variable{index: index, label: label}
}
/** ------------------------------------------------------------------------- */
// Visitor interface for traversing De Bruijn expressions.
type Visitor interface {
VisitAbstraction(*Abstraction)
VisitApplication(*Application)
VisitVariable(*Variable)
}

76
pkg/debruijn/iterator.go Normal file
View File

@@ -0,0 +1,76 @@
package debruijn
// Iterator provides depth-first traversal of De Bruijn expressions.
type Iterator struct {
trace []*Expression
}
// NewIterator creates a new iterator starting at the given expression.
func NewIterator(expr *Expression) *Iterator {
return &Iterator{[]*Expression{expr}}
}
// Done returns true when the iterator has finished traversal.
func (i *Iterator) Done() bool {
return len(i.trace) == 0
}
// Current returns a pointer to the current expression.
func (i *Iterator) Current() *Expression {
if i.Done() {
return nil
}
return i.trace[len(i.trace)-1]
}
// Parent returns a pointer to the parent expression.
func (i *Iterator) Parent() *Expression {
if len(i.trace) < 2 {
return nil
}
return i.trace[len(i.trace)-2]
}
// Swap replaces the current expression with the given expression.
func (i *Iterator) Swap(with Expression) {
current := i.Current()
if current != nil {
*current = with
}
}
// Back moves the iterator back to the parent expression.
func (i *Iterator) Back() bool {
if i.Done() {
return false
}
i.trace = i.trace[:len(i.trace)-1]
return true
}
// Next advances the iterator to the next expression in leftmost-outermost order.
func (i *Iterator) Next() {
switch typed := (*i.Current()).(type) {
case *Abstraction:
i.trace = append(i.trace, &typed.body)
case *Application:
i.trace = append(i.trace, &typed.abstraction)
case *Variable:
for len(i.trace) > 1 {
if app, ok := (*i.Parent()).(*Application); ok {
if app.abstraction == *i.Current() {
i.Back()
i.trace = append(i.trace, &app.argument)
return
}
}
i.Back()
}
i.trace = []*Expression{}
}
}
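For a rough sense of the traversal order, a small usage sketch (the term is an illustrative assumption) that walks an expression and prints every node in leftmost-outermost order:

```go
package main

import (
	"fmt"

	"git.maximhutz.com/max/lambda/pkg/debruijn"
)

func main() {
	// \.(0 1): an abstraction whose body applies the bound variable (0) to a free one (1).
	var e debruijn.Expression = debruijn.NewAbstraction(
		debruijn.NewApplication(debruijn.NewVariable(0, ""), debruijn.NewVariable(1, "")))

	// Visits each node before its children, left to right:
	// \.(0 1), (0 1), 0, 1
	for it := debruijn.NewIterator(&e); !it.Done(); it.Next() {
		fmt.Println((*it.Current()).String())
	}
}
```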

66
pkg/debruijn/reducer.go Normal file
View File

@@ -0,0 +1,66 @@
package debruijn
import (
"git.maximhutz.com/max/lambda/pkg/emitter"
"git.maximhutz.com/max/lambda/pkg/expr"
"git.maximhutz.com/max/lambda/pkg/reducer"
)
// NormalOrderReducer implements normal order (leftmost-outermost) reduction
// for De Bruijn indexed lambda calculus expressions.
type NormalOrderReducer struct {
emitter.BaseEmitter[reducer.Event]
expression *Expression
}
// NewNormalOrderReducer creates a new normal order reducer.
func NewNormalOrderReducer(expression *Expression) *NormalOrderReducer {
return &NormalOrderReducer{
BaseEmitter: *emitter.New[reducer.Event](),
expression: expression,
}
}
// Expression returns the current expression state.
func (r *NormalOrderReducer) Expression() expr.Expression {
return *r.expression
}
// isViable checks if an expression is a redex (reducible expression).
// A redex is an application of an abstraction to an argument.
func isViable(e *Expression) (*Abstraction, Expression, bool) {
if e == nil {
return nil, nil, false
} else if app, appOk := (*e).(*Application); !appOk {
return nil, nil, false
} else if fn, fnOk := app.abstraction.(*Abstraction); !fnOk {
return nil, nil, false
} else {
return fn, app.argument, true
}
}
// Reduce performs normal order reduction on a De Bruijn expression.
func (r *NormalOrderReducer) Reduce() {
r.Emit(reducer.StartEvent)
it := NewIterator(r.expression)
for !it.Done() {
if fn, arg, ok := isViable(it.Current()); !ok {
it.Next()
} else {
// Substitute arg for variable 0 in the body.
substituted := Substitute(fn.body, 0, Shift(arg, 1, 0))
// Shift down to account for the removed abstraction.
it.Swap(Shift(substituted, -1, 0))
r.Emit(reducer.StepEvent)
if _, _, ok := isViable(it.Parent()); ok {
it.Back()
}
}
}
r.Emit(reducer.StopEvent)
}
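An end-to-end sketch, mirroring how the tests drive this reducer: build a redex, reduce it, and read the result back through Expression(). The term and expected output are illustrative assumptions.

```go
package main

import (
	"fmt"

	"git.maximhutz.com/max/lambda/pkg/debruijn"
)

func main() {
	// (\.0) y — the identity applied to a free variable (index 0 at top level).
	var e debruijn.Expression = debruijn.NewApplication(
		debruijn.NewAbstraction(debruijn.NewVariable(0, "")),
		debruijn.NewVariable(0, "y"))

	red := debruijn.NewNormalOrderReducer(&e)
	red.Reduce()

	// Only the free variable survives; Stringify prints indices, not labels.
	fmt.Println(red.Expression().String()) // expected: 0
}
```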

32
pkg/debruijn/shift.go Normal file
View File

@@ -0,0 +1,32 @@
package debruijn
// Shift increments all free variable indices in an expression by the given amount.
// A variable is free if its index is >= the cutoff (depth of nested abstractions).
// This is necessary when substituting an expression into a different binding context.
func Shift(expr Expression, amount int, cutoff int) Expression {
switch e := expr.(type) {
case *Variable:
if e.index >= cutoff {
return NewVariable(e.index+amount, e.label)
}
return e
case *Abstraction:
newBody := Shift(e.body, amount, cutoff+1)
if newBody == e.body {
return e
}
return NewAbstraction(newBody)
case *Application:
newAbs := Shift(e.abstraction, amount, cutoff)
newArg := Shift(e.argument, amount, cutoff)
if newAbs == e.abstraction && newArg == e.argument {
return e
}
return NewApplication(newAbs, newArg)
default:
return expr
}
}
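A small sketch of the cutoff behaviour (using the constructors above; the term is an assumption for illustration): only indices at or above the cutoff, i.e. free in the shifted subterm, actually move.

```go
package main

import (
	"fmt"

	"git.maximhutz.com/max/lambda/pkg/debruijn"
)

func main() {
	// \.(1 0): index 0 is bound by this abstraction, index 1 points past it (free here).
	e := debruijn.NewAbstraction(
		debruijn.NewApplication(debruijn.NewVariable(1, ""), debruijn.NewVariable(0, "")))

	// Shift free indices up by one. The cutoff rises to 1 under the binder,
	// so the bound 0 is untouched while the free 1 becomes 2.
	fmt.Println(debruijn.Shift(e, 1, 0).String()) // expected: \.(2 0)
}
```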

35
pkg/debruijn/stringify.go Normal file
View File

@@ -0,0 +1,35 @@
package debruijn
import (
"strconv"
"strings"
)
type stringifyVisitor struct {
builder strings.Builder
}
func (v *stringifyVisitor) VisitVariable(a *Variable) {
v.builder.WriteString(strconv.Itoa(a.index))
}
func (v *stringifyVisitor) VisitAbstraction(f *Abstraction) {
v.builder.WriteRune('\\')
v.builder.WriteRune('.')
f.body.Accept(v)
}
func (v *stringifyVisitor) VisitApplication(c *Application) {
v.builder.WriteRune('(')
c.abstraction.Accept(v)
v.builder.WriteRune(' ')
c.argument.Accept(v)
v.builder.WriteRune(')')
}
// Stringify converts a De Bruijn expression to its string representation.
func Stringify(e Expression) string {
b := &stringifyVisitor{builder: strings.Builder{}}
e.Accept(b)
return b.builder.String()
}

View File

@@ -0,0 +1,34 @@
package debruijn
// Substitute replaces the variable at the given index with the replacement expression.
// The replacement is shifted appropriately as we descend into nested abstractions.
func Substitute(expr Expression, index int, replacement Expression) Expression {
switch e := expr.(type) {
case *Variable:
if e.index == index {
return replacement
}
return e
case *Abstraction:
// When entering an abstraction, increment the target index and shift the
// replacement to account for the new binding context.
shiftedReplacement := Shift(replacement, 1, 0)
newBody := Substitute(e.body, index+1, shiftedReplacement)
if newBody == e.body {
return e
}
return NewAbstraction(newBody)
case *Application:
newAbs := Substitute(e.abstraction, index, replacement)
newArg := Substitute(e.argument, index, replacement)
if newAbs == e.abstraction && newArg == e.argument {
return e
}
return NewApplication(newAbs, newArg)
default:
return expr
}
}
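Put together, Shift and Substitute perform one β-step the same way NormalOrderReducer.Reduce does: shift the argument up, substitute it for index 0 in the body, then shift the result down. A minimal sketch (the term is an illustrative assumption):

```go
package main

import (
	"fmt"

	"git.maximhutz.com/max/lambda/pkg/debruijn"
)

func main() {
	// Redex (\.\.(1 0)) y, with y a free variable at top level (index 0).
	fn := debruijn.NewAbstraction(
		debruijn.NewAbstraction(
			debruijn.NewApplication(debruijn.NewVariable(1, "x"), debruijn.NewVariable(0, "z"))))
	arg := debruijn.NewVariable(0, "y")

	// Contract the redex by hand, exactly as the reducer's step does.
	body := fn.Body()
	step := debruijn.Substitute(body, 0, debruijn.Shift(arg, 1, 0))
	result := debruijn.Shift(step, -1, 0)

	// The free y now sits under one binder, so it keeps index 1.
	fmt.Println(result.String()) // expected: \.(1 0)
}
```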

View File

@@ -1,5 +1,7 @@
package deltanet
/** ------------------------------------------------------------------------- */
// A connection between exactly two nodes in a graph.
type Edge struct {
A, B Node
@@ -26,6 +28,8 @@ func (e Edge) IsPrincipleEdge() bool {
return e.A.GetMainPort() == e && e.B.GetMainPort() == e
}
/** ------------------------------------------------------------------------- */
type Node interface {
// Returns the principle port that the node is attached to.
GetMainPort() Edge
@@ -38,6 +42,8 @@ type Node interface {
GetLabel() string
}
/** ------------------------------------------------------------------------- */
type EraserNode struct {
Main Edge
}
@@ -46,6 +52,8 @@ func (n EraserNode) GetLabel() string { return "Ⓧ" }
func (n EraserNode) GetMainPort() Edge { return n.Main }
func (n EraserNode) GetAuxPorts() []Edge { return []Edge{} }
/** ------------------------------------------------------------------------- */
type ReplicatorNode struct {
Main Edge
Level uint
@@ -60,6 +68,8 @@ func (n ReplicatorNode) GetAuxPorts() []Edge { return n.Aux }
// Returns the level of the replicator node.
func (n ReplicatorNode) GetLevel() uint { return n.Level }
/** ------------------------------------------------------------------------- */
type FanNode struct {
Label string
Main Edge
@@ -70,6 +80,8 @@ func (n FanNode) GetLabel() string { return n.Label }
func (n FanNode) GetMainPort() Edge { return n.Main }
func (n FanNode) GetAuxPorts() []Edge { return []Edge{n.Left, n.Right} }
/** ------------------------------------------------------------------------- */
type TerminalNode struct {
Label string
Main Edge
@@ -78,3 +90,5 @@ type TerminalNode struct {
func (n TerminalNode) GetLabel() string { return n.Label }
func (n TerminalNode) GetMainPort() Edge { return n.Main }
func (n TerminalNode) GetAuxPorts() []Edge { return []Edge{} }
/** ------------------------------------------------------------------------- */

View File

@@ -2,53 +2,45 @@ package emitter
import "git.maximhutz.com/max/lambda/pkg/set"
type Observer struct {
fn func()
message string
emitter *Emitter
type Emitter[E comparable] interface {
On(E, func()) Listener[E]
Off(Listener[E])
Emit(E)
}
type Emitter struct {
listeners map[string]*set.Set[*Observer]
type BaseEmitter[E comparable] struct {
listeners map[E]*set.Set[Listener[E]]
}
func Ignore[T any](fn func()) func(T) {
return func(T) { fn() }
func (e *BaseEmitter[E]) On(kind E, fn func()) Listener[E] {
if e.listeners[kind] == nil {
e.listeners[kind] = set.New[Listener[E]]()
}
func (e *Emitter) On(message string, fn func()) *Observer {
observer := &Observer{
fn: fn,
message: message,
emitter: e,
listener := &BaseListener[E]{kind, fn}
e.listeners[kind].Add(listener)
return listener
}
if e.listeners == nil {
e.listeners = map[string]*set.Set[*Observer]{}
func (e *BaseEmitter[E]) Off(listener Listener[E]) {
kind := listener.Kind()
if e.listeners[kind] != nil {
e.listeners[kind].Remove(listener)
}
}
if e.listeners[message] == nil {
e.listeners[message] = set.New[*Observer]()
func (e *BaseEmitter[E]) Emit(event E) {
if e.listeners[event] == nil {
e.listeners[event] = set.New[Listener[E]]()
}
e.listeners[message].Add(observer)
return observer
for listener := range e.listeners[event].Items() {
listener.Run()
}
}
func (o *Observer) Off() {
if o.emitter.listeners[o.message] == nil {
return
}
o.emitter.listeners[o.message].Remove(o)
}
func (e *Emitter) Emit(message string) {
if e.listeners[message] == nil {
return
}
for listener := range *e.listeners[message] {
listener.fn()
func New[E comparable]() *BaseEmitter[E] {
return &BaseEmitter[E]{
listeners: map[E]*set.Set[Listener[E]]{},
}
}

19
pkg/emitter/listener.go Normal file
View File

@@ -0,0 +1,19 @@
package emitter
type Listener[E comparable] interface {
Kind() E
Run()
}
type BaseListener[E comparable] struct {
kind E
fn func()
}
func (l BaseListener[E]) Kind() E {
return l.kind
}
func (l BaseListener[E]) Run() {
l.fn()
}
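A tiny usage sketch of the generic emitter, assuming a hypothetical event type defined only for this example:

```go
package main

import (
	"fmt"

	"git.maximhutz.com/max/lambda/pkg/emitter"
)

// signal is a hypothetical event type for this example.
type signal int

const ping signal = 0

func main() {
	em := emitter.New[signal]()

	// On returns the listener handle needed to unsubscribe later.
	l := em.On(ping, func() { fmt.Println("ping") })

	em.Emit(ping) // prints "ping"
	em.Off(l)
	em.Emit(ping) // nothing: the listener was removed
}
```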

11
pkg/expr/expr.go Normal file
View File

@@ -0,0 +1,11 @@
// Package expr provides the abstract Expression interface for all evaluatable
// expression types in the lambda interpreter.
package expr
// Expression is the base interface for all evaluatable expression types.
// Different evaluation modes (lambda calculus, SKI combinators, typed lambda
// calculus, etc.) implement this interface with their own concrete types.
type Expression interface {
// String returns a human-readable representation of the expression.
String() string
}

View File

@@ -1,35 +0,0 @@
package fifo
import "fmt"
type FIFO[T any] []T
func New[T any](items ...T) *FIFO[T] {
f := FIFO[T](items)
return &f
}
func (f *FIFO[T]) Push(item T) {
*f = append(*f, item)
}
func (f *FIFO[T]) Empty() bool {
return len(*f) == 0
}
func (f *FIFO[T]) MustPop() T {
var item T
*f, item = (*f)[:len(*f)-1], (*f)[len(*f)-1]
return item
}
func (f *FIFO[T]) Pop() (T, error) {
var item T
if f.Empty() {
return item, fmt.Errorf("stack is exhausted")
}
return f.MustPop(), nil
}

View File

@@ -1,60 +1,92 @@
package lambda
import "git.maximhutz.com/max/lambda/pkg/expr"
// Expression is the interface for all lambda calculus expression types.
// It embeds the general expr.Expression interface for cross-mode compatibility.
type Expression interface {
expr.Expression
Accept(Visitor)
Copy() Expression
}
/** ------------------------------------------------------------------------- */
type Abstraction struct {
Parameter string
Body Expression
parameter string
body Expression
}
func (a *Abstraction) Copy() Expression {
return NewAbstraction(a.Parameter, a.Body.Copy())
func (a *Abstraction) Parameter() string {
return a.parameter
}
func (a *Abstraction) Body() Expression {
return a.body
}
func (a *Abstraction) Accept(v Visitor) {
v.VisitAbstraction(a)
}
func NewAbstraction(parameter string, body Expression) *Abstraction {
return &Abstraction{Parameter: parameter, Body: body}
func (a *Abstraction) String() string {
return Stringify(a)
}
func NewAbstraction(parameter string, body Expression) *Abstraction {
return &Abstraction{parameter: parameter, body: body}
}
/** ------------------------------------------------------------------------- */
type Application struct {
Abstraction Expression
Argument Expression
abstraction Expression
argument Expression
}
func (a *Application) Copy() Expression {
return NewApplication(a.Abstraction.Copy(), a.Argument.Copy())
func (a *Application) Abstraction() Expression {
return a.abstraction
}
func (a *Application) Argument() Expression {
return a.argument
}
func (a *Application) Accept(v Visitor) {
v.VisitApplication(a)
}
func NewApplication(function Expression, argument Expression) *Application {
return &Application{Abstraction: function, Argument: argument}
func (a *Application) String() string {
return Stringify(a)
}
func NewApplication(abstraction Expression, argument Expression) *Application {
return &Application{abstraction: abstraction, argument: argument}
}
/** ------------------------------------------------------------------------- */
type Variable struct {
Value string
value string
}
func (v *Variable) Copy() Expression {
return NewVariable(v.Value)
func (v *Variable) Value() string {
return v.value
}
func (v *Variable) Accept(visitor Visitor) {
visitor.VisitVariable(v)
}
func NewVariable(name string) *Variable {
return &Variable{Value: name}
func (v *Variable) String() string {
return Stringify(v)
}
func NewVariable(name string) *Variable {
return &Variable{value: name}
}
/** ------------------------------------------------------------------------- */
type Visitor interface {
VisitAbstraction(*Abstraction)
VisitApplication(*Application)

View File

@@ -5,14 +5,14 @@ import "git.maximhutz.com/max/lambda/pkg/set"
func GetFreeVariables(e Expression) *set.Set[string] {
switch e := e.(type) {
case *Variable:
return set.New(e.Value)
return set.New(e.value)
case *Abstraction:
vars := GetFreeVariables(e.Body)
vars.Remove(e.Parameter)
vars := GetFreeVariables(e.body)
vars.Remove(e.parameter)
return vars
case *Application:
vars := GetFreeVariables(e.Abstraction)
vars.Merge(GetFreeVariables(e.Argument))
vars := GetFreeVariables(e.abstraction)
vars.Merge(GetFreeVariables(e.argument))
return vars
default:
return nil

View File

@@ -3,11 +3,11 @@ package lambda
func IsFreeVariable(n string, e Expression) bool {
switch e := e.(type) {
case *Variable:
return e.Value == n
return e.value == n
case *Abstraction:
return e.Parameter != n && IsFreeVariable(n, e.Body)
return e.parameter != n && IsFreeVariable(n, e.body)
case *Application:
return IsFreeVariable(n, e.Abstraction) || IsFreeVariable(n, e.Argument)
return IsFreeVariable(n, e.abstraction) || IsFreeVariable(n, e.argument)
default:
return false
}

68
pkg/lambda/iterator.go Normal file
View File

@@ -0,0 +1,68 @@
package lambda
type Iterator struct {
trace []*Expression
}
func NewIterator(expr *Expression) *Iterator {
return &Iterator{[]*Expression{expr}}
}
func (i *Iterator) Done() bool {
return len(i.trace) == 0
}
func (i *Iterator) Current() *Expression {
if i.Done() {
return nil
}
return i.trace[len(i.trace)-1]
}
func (i *Iterator) Parent() *Expression {
if len(i.trace) < 2 {
return nil
}
return i.trace[len(i.trace)-2]
}
func (i *Iterator) Swap(with Expression) {
current := i.Current()
if current != nil {
*current = with
}
}
func (i *Iterator) Back() bool {
if i.Done() {
return false
}
i.trace = i.trace[:len(i.trace)-1]
return true
}
func (i *Iterator) Next() {
switch typed := (*i.Current()).(type) {
case *Abstraction:
i.trace = append(i.trace, &typed.body)
case *Application:
i.trace = append(i.trace, &typed.abstraction)
case *Variable:
for len(i.trace) > 1 {
if app, ok := (*i.Parent()).(*Application); ok {
if app.abstraction == *i.Current() {
i.Back()
i.trace = append(i.trace, &app.argument)
return
}
}
i.Back()
}
i.trace = []*Expression{}
}
}

View File

@@ -1,27 +0,0 @@
package lambda
import "git.maximhutz.com/max/lambda/pkg/fifo"
func ReduceOnce(e *Expression) bool {
stack := fifo.New(e)
for !stack.Empty() {
top := stack.MustPop()
switch typed := (*top).(type) {
case *Abstraction:
stack.Push(&typed.Body)
case *Application:
if fn, fnOk := typed.Abstraction.(*Abstraction); fnOk {
Substitute(&fn.Body, fn.Parameter, typed.Argument)
*top = fn.Body
return true
}
stack.Push(&typed.Argument)
stack.Push(&typed.Abstraction)
}
}
return false
}

61
pkg/lambda/reducer.go Normal file
View File

@@ -0,0 +1,61 @@
package lambda
import (
"git.maximhutz.com/max/lambda/pkg/emitter"
"git.maximhutz.com/max/lambda/pkg/expr"
"git.maximhutz.com/max/lambda/pkg/reducer"
)
// NormalOrderReducer implements normal order (leftmost-outermost) reduction
// for lambda calculus expressions.
type NormalOrderReducer struct {
emitter.BaseEmitter[reducer.Event]
expression *Expression
}
// NewNormalOrderReducer creates a new normal order reducer.
func NewNormalOrderReducer(expression *Expression) *NormalOrderReducer {
return &NormalOrderReducer{
BaseEmitter: *emitter.New[reducer.Event](),
expression: expression,
}
}
// Expression returns the current expression state.
func (r *NormalOrderReducer) Expression() expr.Expression {
return *r.expression
}
func isViable(e *Expression) (*Abstraction, Expression, bool) {
if e == nil {
return nil, nil, false
} else if app, appOk := (*e).(*Application); !appOk {
return nil, nil, false
} else if fn, fnOk := app.abstraction.(*Abstraction); !fnOk {
return nil, nil, false
} else {
return fn, app.argument, true
}
}
// Reduce performs normal order reduction on the stored expression in place,
// emitting Start, Step, and Stop events as it goes.
func (r *NormalOrderReducer) Reduce() {
r.Emit(reducer.StartEvent)
it := NewIterator(r.expression)
for !it.Done() {
if fn, arg, ok := isViable(it.Current()); !ok {
it.Next()
} else {
it.Swap(Substitute(fn.body, fn.parameter, arg))
r.Emit(reducer.StepEvent)
if _, _, ok := isViable(it.Parent()); ok {
it.Back()
}
}
}
r.Emit(reducer.StopEvent)
}

View File

@@ -1,19 +1,38 @@
package lambda
func Rename(e Expression, target string, substitute string) {
switch e := e.(type) {
func Rename(expr Expression, target string, newName string) Expression {
switch e := expr.(type) {
case *Variable:
if e.Value == target {
e.Value = substitute
if e.value == target {
return NewVariable(newName)
}
return e
case *Abstraction:
if e.Parameter == target {
e.Parameter = substitute
newParam := e.parameter
if e.parameter == target {
newParam = newName
}
Rename(e.Body, target, substitute)
newBody := Rename(e.body, target, newName)
if newParam == e.parameter && newBody == e.body {
return e
}
return NewAbstraction(newParam, newBody)
case *Application:
Rename(e.Abstraction, target, substitute)
Rename(e.Argument, target, substitute)
newAbs := Rename(e.abstraction, target, newName)
newArg := Rename(e.argument, target, newName)
if newAbs == e.abstraction && newArg == e.argument {
return e
}
return NewApplication(newAbs, newArg)
default:
return expr
}
}

View File

@@ -7,21 +7,21 @@ type stringifyVisitor struct {
}
func (v *stringifyVisitor) VisitVariable(a *Variable) {
v.builder.WriteString(a.Value)
v.builder.WriteString(a.value)
}
func (v *stringifyVisitor) VisitAbstraction(f *Abstraction) {
v.builder.WriteRune('\\')
v.builder.WriteString(f.Parameter)
v.builder.WriteString(f.parameter)
v.builder.WriteRune('.')
f.Body.Accept(v)
f.body.Accept(v)
}
func (v *stringifyVisitor) VisitApplication(c *Application) {
v.builder.WriteRune('(')
c.Abstraction.Accept(v)
c.abstraction.Accept(v)
v.builder.WriteRune(' ')
c.Argument.Accept(v)
c.argument.Accept(v)
v.builder.WriteRune(')')
}

View File

@@ -1,27 +1,46 @@
package lambda
func Substitute(e *Expression, target string, replacement Expression) {
switch typed := (*e).(type) {
func Substitute(expr Expression, target string, replacement Expression) Expression {
switch e := expr.(type) {
case *Variable:
if typed.Value == target {
*e = replacement.Copy()
if e.value == target {
return replacement
}
return e
case *Abstraction:
if typed.Parameter == target {
return
if e.parameter == target {
return e
}
if IsFreeVariable(typed.Parameter, replacement) {
replacementFreeVars := GetFreeVariables(replacement)
used := GetFreeVariables(typed.Body)
used.Merge(replacementFreeVars)
freshVar := GenerateFreshName(used)
Rename(typed, typed.Parameter, freshVar)
body := e.body
param := e.parameter
if IsFreeVariable(param, replacement) {
freeVars := GetFreeVariables(replacement)
freeVars.Merge(GetFreeVariables(body))
freshVar := GenerateFreshName(freeVars)
body = Rename(body, param, freshVar)
param = freshVar
}
Substitute(&typed.Body, target, replacement)
newBody := Substitute(body, target, replacement)
if newBody == body && param == e.parameter {
return e
}
return NewAbstraction(param, newBody)
case *Application:
Substitute(&typed.Abstraction, target, replacement)
Substitute(&typed.Argument, target, replacement)
newAbs := Substitute(e.abstraction, target, replacement)
newArg := Substitute(e.argument, target, replacement)
if newAbs == e.abstraction && newArg == e.argument {
return e
}
return NewApplication(newAbs, newArg)
default:
return expr
}
}
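A short sketch of the capture check in the named setting: substituting y for x under a binder that already uses y forces an α-rename first. The exact fresh name depends on GenerateFreshName, so it is shown only schematically.

```go
package main

import (
	"fmt"

	"git.maximhutz.com/max/lambda/pkg/lambda"
)

func main() {
	// \y.(x y): x is free, y is bound.
	abs := lambda.NewAbstraction("y",
		lambda.NewApplication(lambda.NewVariable("x"), lambda.NewVariable("y")))

	// Naively replacing x with y would capture it; the binder is renamed instead.
	out := lambda.Substitute(abs, "x", lambda.NewVariable("y"))
	fmt.Println(out.String()) // \<fresh>.(y <fresh>) for some fresh parameter name
}
```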

13
pkg/reducer/events.go Normal file
View File

@@ -0,0 +1,13 @@
package reducer
// Event represents lifecycle events during reduction.
type Event int
const (
// StartEvent is emitted before reduction begins.
StartEvent Event = iota
// StepEvent is emitted after each reduction step.
StepEvent
// StopEvent is emitted after reduction completes.
StopEvent
)

27
pkg/reducer/reducer.go Normal file
View File

@@ -0,0 +1,27 @@
// Package reducer provides the abstract Reducer interface for all expression
// reduction strategies.
package reducer
import (
"git.maximhutz.com/max/lambda/pkg/emitter"
"git.maximhutz.com/max/lambda/pkg/expr"
)
// Reducer defines the interface for expression reduction strategies.
// Different evaluation modes (normal order, applicative order, SKI combinators,
// etc.) implement this interface with their own reduction logic.
//
// Reducers also implement the Emitter interface to allow plugins to observe
// reduction lifecycle events (Start, Step, Stop).
type Reducer interface {
emitter.Emitter[Event]
// Reduce performs all reduction steps on the expression.
// Emits StartEvent before reduction, StepEvent after each step, and
// StopEvent after completion.
// The final reduced expression can be read afterwards via Expression().
Reduce()
// Expression returns the current expression state.
Expression() expr.Expression
}
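As a usage sketch of the combined Reducer/Emitter contract (mirroring what the plugins in internal/plugins do), a caller can subscribe to lifecycle events and then read the final state via Expression(). The term and printed output are illustrative assumptions.

```go
package main

import (
	"fmt"

	"git.maximhutz.com/max/lambda/pkg/lambda"
	"git.maximhutz.com/max/lambda/pkg/reducer"
)

func main() {
	// (\x.x y): one β-step away from y.
	var e lambda.Expression = lambda.NewApplication(
		lambda.NewAbstraction("x", lambda.NewVariable("x")),
		lambda.NewVariable("y"))

	var red reducer.Reducer = lambda.NewNormalOrderReducer(&e)

	// Observe each step through the embedded emitter.
	red.On(reducer.StepEvent, func() {
		fmt.Println("step:", red.Expression().String())
	})

	red.Reduce()
	fmt.Println("result:", red.Expression().String()) // result: y
}
```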

View File

@@ -4,6 +4,8 @@ type Expression interface {
IsExpression()
}
/** ------------------------------------------------------------------------- */
type Abstraction struct {
Parameters []string
Body Expression
@@ -28,6 +30,8 @@ func (Application) IsExpression() {}
func (Atom) IsExpression() {}
func (Clause) IsExpression() {}
/** ------------------------------------------------------------------------- */
func NewAbstraction(parameter []string, body Expression) *Abstraction {
return &Abstraction{Parameters: parameter, Body: body}
}

View File

@@ -4,6 +4,8 @@ type Statement interface {
IsStatement()
}
/** ------------------------------------------------------------------------- */
type LetStatement struct {
Name string
Parameters []string
@@ -17,6 +19,8 @@ type DeclareStatement struct {
func (LetStatement) IsStatement() {}
func (DeclareStatement) IsStatement() {}
/** ------------------------------------------------------------------------- */
func NewLet(name string, parameters []string, body Expression) *LetStatement {
return &LetStatement{Name: name, Parameters: parameters, Body: body}
}

View File

@@ -77,6 +77,21 @@ func getToken(i *iterator.Iterator[rune]) (*Token, error) {
}
case letter == ';':
return NewHardBreak(index), nil
case letter == '#':
// Skip everything until the next newline or EOF.
for !i.Done() {
r, err := i.Next()
if err != nil {
return nil, trace.Wrap(err, "error while parsing comment")
}
if r == '\n' {
// Put the newline back so it can be processed as a soft break.
i.Back()
break
}
}
return nil, nil
case unicode.IsSpace(letter):
return nil, nil
case isVariable(letter):

View File

@@ -1,5 +1,7 @@
package set
import "iter"
type Set[T comparable] map[T]bool
func (s *Set[T]) Add(items ...T) {
@@ -34,6 +36,16 @@ func (s Set[T]) ToList() []T {
return list
}
func (s Set[T]) Items() iter.Seq[T] {
return func(yield func(T) bool) {
for item := range s {
if !yield(item) {
return
}
}
}
}
func New[T comparable](items ...T) *Set[T] {
result := &Set[T]{}

View File

@@ -1,16 +0,0 @@
(\0.
(\inc.
(\add.
(\mult.
(\exp.
(exp (inc (inc (inc (inc 0)))) (inc (inc (inc (inc (inc 0))))))
\n m.(m n)
)
\m n f.(m (n f))
)
\n m.(m inc n)
)
\n f x.(f (n f x))
)
\f x.x
)

File diff suppressed because one or more lines are too long

View File

@@ -1,7 +1,8 @@
0 := \f.\x.x
inc n := \f x.(f (n f x))
exp n m := (m n)
print n := (n F X)
N := (inc (inc (inc (inc (inc 0)))))
(exp N N)
(print (exp N N))

File diff suppressed because one or more lines are too long

8
tests/church_6^6.test Normal file
View File

@@ -0,0 +1,8 @@
0 := \f.\x.x
inc n := \f x.(f (n f x))
exp n m := (m n)
print n := (n F X)
N := (inc (inc (inc (inc (inc (inc 0))))))
(print (exp N N))

1
tests/comments.expected Normal file
View File

@@ -0,0 +1 @@
VALUE

17
tests/comments.test Normal file
View File

@@ -0,0 +1,17 @@
# This is a full-line comment at the start
# The following defines the identity function
identity := \x.x # This is an end-of-line comment
# Define a simple function that applies a function twice
twice := \f.\x.(f
# Comments can be anywhere!
(f x))
# Test that comments don't interfere with expressions
result := (twice identity VALUE) # Should just return VALUE
# Multiple comments in a row
# can appear anywhere
# without breaking the code
result # Final comment at the end

View File

@@ -0,0 +1 @@
(0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (1 END)))))))))))))))))))))))))))))))

1
tests/list_2^30.expected Normal file
View File

@@ -0,0 +1 @@
(0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (1 END)))))))))))))))))))))))))))))))

1
tests/thunk.expected Normal file
View File

@@ -0,0 +1 @@
VALUE