35 Commits

Author SHA1 Message Date
76ea6ea2cb feat: functional options pattern 2026-02-11 21:08:57 -05:00
3b7cf21eb7 feat: undo 2026-02-11 20:54:05 -05:00
b3f9f08c62 feat: scanner added 2026-02-11 20:28:29 -05:00
aca197ef51 refactor: simplify iterator.Try and remove unnecessary backtracking (#47)
## Description

`iterator.Try` previously copied the entire iterator and synced it back on success, causing an unnecessary heap allocation on every call.
This PR simplifies `Try` to save and restore the index directly, and removes the now-unused `Copy` and `Sync` methods.

- Rewrite `ScanRune` and `ParseRawToken` as peek-then-advance, eliminating the need for `Try` at leaf level.
- Remove redundant `Try` wrappers from `parseExpression`, `parseAbstraction`, `parseApplication`, `parseLet`, and `parseToken`, which are already disambiguated by their callers.
- Keep `Try` only where true backtracking is needed: `parseStatement`, which must choose between `parseLet` and `parseDeclare`.
- Fix a pre-existing panic in the saccharine package's `parseExpression` when the iterator is exhausted (added a `Done()` guard).

### Decisions

- `Try` now operates on the original iterator instead of a copy, removing the confusing pattern where the callback's `i` was a different object than the caller's `i`.
- Removed `parseSoftBreak` and `parseHardBreak` helper functions since `ParseRawToken` no longer needs `Try` wrapping.

## Benefits

- Eliminates a heap allocation per `Try` call.
- Reduces nesting and indirection in all parse functions.
- Makes the code easier to follow by removing the shadow-`i` pattern.
- `Try` is now only used at genuine choice points in the grammar.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #47
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-02-12 01:04:26 +00:00
da3da70855 refactor: extract shared token package (#46)
## Description

Both the `saccharine` and `lambda` packages need tokenizing and parsing primitives.
This PR extracts shared token infrastructure into a new `pkg/token` package, then wires both languages up to use it.

- Add `pkg/token` with a generic `Token[T]` type, `Scan`, `ScanAtom`, `ScanRune`, `ScanCharacter`, `IsVariable`, `ParseRawToken`, and `ParseList`.
- Refactor `pkg/saccharine` to delegate to `pkg/token`, removing duplicated scanning and parsing helpers.
- Implement `Codec.Decode` for `pkg/lambda` (scanner + parser) using the shared token package.
- Add `iterator.While` for predicate-driven iteration.
- Rename `iterator.Do` to `iterator.Try` to better describe its rollback semantics.

### Decisions

- The `Type` constraint (`comparable` + `Name() string`) keeps the generic token flexible while ensuring every token type can produce readable error messages.
- `iterator.Do` was renamed to `iterator.Try` since it describes a try/rollback operation, not a side-effecting "do".

## Benefits

- Eliminates duplicated token, scanning, and parsing code between languages.
- Enables the `lambda` package to decode (parse) lambda calculus strings, which was previously unimplemented.
- Makes it straightforward to add new languages by reusing `pkg/token` primitives.
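A minimal sketch of what the generic token and its `Type` constraint might look like (the `Kind` field and `LambdaType` example are illustrative assumptions, not the real `pkg/token` code):

```go
package main

import "fmt"

// Type is the token-type constraint described above: comparable so token
// kinds can be compared and switched on, plus Name() so every token type
// can produce readable error messages.
type Type interface {
	comparable
	Name() string
}

// Token is a hypothetical shape for the generic token in pkg/token.
type Token[T Type] struct {
	Kind  T
	Value string
}

// LambdaType is an assumed example token type for the lambda language.
type LambdaType int

const (
	TVariable LambdaType = iota
	TLambda
)

func (t LambdaType) Name() string {
	switch t {
	case TVariable:
		return "variable"
	case TLambda:
		return "lambda"
	}
	return "unknown"
}

func main() {
	tok := Token[LambdaType]{Kind: TLambda, Value: "\\"}
	// The Name() requirement pays off in error messages like this one.
	fmt.Printf("unexpected %s token %q\n", tok.Kind.Name(), tok.Value)
}
```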

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Reviewed-on: #46
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-02-12 00:25:18 +00:00
361f529bdc docs: document remaining packages and simplify AST types (#45)
## Summary

- Added doc comments across the codebase: `pkg/lambda`, `pkg/saccharine`, `pkg/codec`, `pkg/engine`, `pkg/iterator`, `pkg/set`, `pkg/convert`, `internal/registry`, and `cmd/lambda`.
- Made lambda and saccharine expression structs use public fields instead of getters, matching `go/ast` conventions.
- Removed superfluous constructors for saccharine and lambda expression/statement types in favor of struct literals.
- Consolidated saccharine token constructors into a single `NewToken` function.
- Removed the unused `trace` package.

## Test plan

- [x] `go build ./...` passes.
- [x] `go test ./...` passes.
- [ ] Verify `go doc` output renders correctly for documented packages.

Reviewed-on: #45
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-02-10 01:15:41 +00:00
1f486875fd style: rename repr to expr (#44)
## Description

The `Repr` type name was unclear — it was intended to represent a lambda calculus expression, not a "representation."
This PR renames `Repr` to `Expr` throughout the registry package for clarity.

- Rename `Repr` interface to `Expr` and `baseRepr` struct to `baseExpr`.
- Rename `repr.go` to `expr.go`.
- Rename `ID()` method to `Repr()` to indicate the representation type.
- Rename `NewRepr` constructor to `NewExpr`.
- Update all usages in codec, conversion, engine, process, and registry files.
- Add command aliases `conv` and `eng` for `convert` and `engine` subcommands.

## Benefits

- The naming better reflects the domain: an `Expr` is an expression, and `Repr()` returns its representation kind.
- Command aliases reduce typing for common subcommands.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #44
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-02-07 15:26:50 +00:00
bbe027e9f4 style: restructure cli and registry packages (#43)
## Description

The `internal/cli` package had grown to contain both CLI utilities (source/destination I/O) and registry-level abstractions (repr, conversion, engine, marshaler).
This PR separates concerns by moving registry types into `internal/registry` and keeping only CLI I/O types in `internal/cli`.
It also simplifies several core abstractions and aligns naming conventions.

- Move `Source`, `Destination` from `internal/config` to `internal/cli`.
- Move `Repr`, `Conversion`, `Engine`, `Process`, `Codec` from `internal/cli` to `internal/registry`.
- Rename "marshalers" to "codecs" throughout the codebase.
- Simplify `codec.Codec[T, U]` to `codec.Codec[T]` (string-based marshaling only).
- Add `codec.Conversion[T, U]` as a function type alias.
- Change `engine.Engine[T]` from an interface to a function type.
- Merge `Engine.Load()` + `Process.Set()` into a single `Engine.Load(Repr)` call.
- Convert `Saccharine2Lambda` from a struct to standalone conversion functions.
- Replace registry methods (`MustAddMarshaler`, `MustAddEngine`, `MustAddConversions`) with generic free functions (`RegisterCodec`, `RegisterEngine`, `RegisterConversion`).
- Remove unused `internal/config` package (`Config`, `GetLogger`, `ParseFromArgs`).
- Remove unused `pkg/emitter` package.
- Rename `Id()` to `ID()` per Go conventions.
- Add documentation comments and enable `checkPublicInterface` lint rule.
- Rename `reduce_one.go` to `reduce_once.go`.

### Decisions

- `Engine[T]` is now a function type (`func(T) (Process[T], error)`) rather than an interface, since the only method was `Load`.
- `Codec[T, U]` was split into `Codec[T]` (string marshaling) and `Conversion[T, U]` (type-to-type conversion function), which better reflects how they are actually used.
- Registration uses free generic functions (`RegisterCodec`, `RegisterEngine`, `RegisterConversion`) instead of methods on `Registry`, enabling type inference at the call site.
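The function-type engine and free-function registration described above might look roughly like this (a toy sketch; `Process`, `countdown`, and the string-keyed `engines` map are assumptions, not the real registry):

```go
package main

import "fmt"

// Process is an assumed minimal interface for a running computation.
type Process[T any] interface {
	Step() (T, bool)
}

// Engine is the function type from the decision above: loading a value
// yields a running Process.
type Engine[T any] func(T) (Process[T], error)

// countdown is a toy Process over ints.
type countdown struct{ n int }

func (c *countdown) Step() (int, bool) {
	if c.n == 0 {
		return 0, false
	}
	c.n--
	return c.n, true
}

// engines is a toy registry; RegisterEngine is a free generic function,
// so T is inferred from the argument at the call site.
var engines = map[string]any{}

func RegisterEngine[T any](name string, e Engine[T]) {
	engines[name] = e
}

func main() {
	var countdownEngine Engine[int] = func(n int) (Process[int], error) {
		return &countdown{n: n}, nil
	}
	RegisterEngine("countdown", countdownEngine) // T inferred as int
	e := engines["countdown"].(Engine[int])
	p, _ := e(3)
	for v, ok := p.Step(); ok; v, ok = p.Step() {
		fmt.Println(v)
	}
}
```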

## Benefits

- Clearer separation of concerns between CLI I/O and the registry's internal type system.
- Simpler abstractions: fewer interfaces, fewer wrapper types, fewer indirections.
- Removing unused packages (`config`, `emitter`) reduces maintenance burden.
- Naming conventions (`ID`, codecs, `reduce_once`) are more idiomatic.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #43
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-02-07 05:39:32 +00:00
58d0823069 feat: rename --from/--to flags to --input/--output (#42)
## Description

The `convert` and `reduce` commands used `--from` and `--to` flags to specify input/output representations.
These names are ambiguous and don't clearly describe what they control.
This PR renames them to `--input`/`--output` and adds `-i`/`-o` short aliases for convenience.

- Rename `--from` to `--input` (`-i`) in `convert` and `reduce` commands.
- Rename `--to` to `--output` (`-o`) in the `convert` command.
- Switch from `StringVar` to `StringVarP` to support the new short flags.

## Benefits

- Flag names now clearly indicate they refer to representations, not file paths.
- Short aliases `-i` and `-o` make CLI usage more concise.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Reviewed-on: #42
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-02-07 04:31:57 +00:00
a3ee34732e refactor: rewrite CLI and internal architecture (#41)
## Description

The old architecture used a monolithic `main()` with a custom arg parser, an event-emitter-based runtime, and a plugin system for optional features.
This PR rewrites the CLI and internal architecture to be modular, extensible, and built around a registry of interchangeable components.

- Replace custom CLI arg parsing with Cobra subcommands (`convert`, `reduce`, `engine list`).
- Introduce a registry system (`internal/registry`) for marshalers, codecs, and engines, with BFS-based conversion path resolution.
- Add type-erased adapter layer (`internal/cli`) with `Repr`, `Engine`, `Process`, `Marshaler`, and `Conversion` interfaces wrapping generic `pkg/` types.
- Replace the event-emitter-based `Runtime` with a simpler `Engine`/`Process` model (`pkg/engine`).
- Add generic `Codec[T, U]` and `Marshaler[T]` interfaces (`pkg/codec`).
- Merge `saccharine/token` sub-package into `saccharine` and rename scanner functions from `parse*` to `scan*`.
- Make saccharine-to-lambda conversion bidirectional (encode and decode).
- Add `lambda.Marshaler` and `saccharine.Marshaler` implementing `codec.Marshaler`.
- Remove old infrastructure: `pkg/runtime`, `pkg/expr`, `internal/plugins`, `internal/statistics`.
- Add `make lint` target and update golangci-lint config.

### Decisions

- Cobra was chosen for the CLI framework to support nested subcommands and standard flag handling.
- The registry uses BFS to find conversion paths between representations, allowing multi-hop conversions without hardcoding routes.
- Type erasure via `cli.Repr` (wrapping `any`) enables the registry to work with heterogeneous types while keeping `pkg/` generics type-safe.
- The old plugin/event system was removed entirely rather than adapted, since the new `Process` model can support hooks differently in the future.
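The BFS conversion-path resolution can be sketched as follows (a simplified stand-in for the real resolver in `internal/registry`; `findPath` and the string-keyed edge map are assumptions):

```go
package main

import "fmt"

// findPath runs BFS over conversion edges between representation names,
// returning the shortest conversion route or nil if none exists.
func findPath(edges map[string][]string, from, to string) []string {
	type node struct {
		name string
		path []string
	}
	visited := map[string]bool{from: true}
	queue := []node{{from, []string{from}}}
	for len(queue) > 0 {
		cur := queue[0]
		queue = queue[1:]
		if cur.name == to {
			return cur.path
		}
		for _, next := range edges[cur.name] {
			if visited[next] {
				continue
			}
			visited[next] = true
			path := append(append([]string{}, cur.path...), next)
			queue = append(queue, node{next, path})
		}
	}
	return nil // no conversion route exists
}

func main() {
	edges := map[string][]string{
		"saccharine": {"lambda"},
		"lambda":     {"debruijn"},
	}
	// Registering one edge (lambda -> debruijn) automatically enables
	// the transitive saccharine -> debruijn route.
	fmt.Println(findPath(edges, "saccharine", "debruijn"))
}
```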

## Benefits

- Subcommands make the CLI self-documenting and easier to extend with new functionality.
- The registry pattern decouples representations, conversions, and engines, making it trivial to add new ones.
- BFS conversion routing means adding a single codec automatically enables transitive conversions.
- Simpler `Engine`/`Process` model reduces complexity compared to the event-emitter runtime.
- Consolidating the `token` sub-package reduces import depth and package sprawl.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [ ] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Reviewed-on: #41
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-02-07 03:25:32 +00:00
f2c8d9f7d2 fix: use loop variable instead of global ticker in GenerateFreshName (#40)
## Description

`GenerateFreshName` used a global `ticker` variable but never incremented it inside the loop.
This caused an infinite loop if the first generated name (`_0`) was already in the used set.

- Remove global `ticker` variable.
- Use loop variable `i` directly to generate candidate names.

## Benefits

- Fixes infinite loop bug when generated name collides with used set.
- Removes unnecessary global state.
- Simpler and more predictable behavior.
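The fixed loop can be sketched as follows (the `_<n>` name format comes from the description above; the function body is an illustrative reconstruction, not the exact source):

```go
package main

import "fmt"

// GenerateFreshName returns the first candidate name not in the used
// set. The loop variable i drives the candidates directly, so a
// collision advances to the next name instead of looping forever on _0.
func GenerateFreshName(used map[string]bool) string {
	for i := 0; ; i++ {
		name := fmt.Sprintf("_%d", i)
		if !used[name] {
			return name
		}
	}
}

func main() {
	used := map[string]bool{"_0": true, "_1": true}
	fmt.Println(GenerateFreshName(used)) // with _0 and _1 taken, yields _2
}
```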

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #40
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-18 20:58:23 +00:00
9c7fb8ceba refactor: rename interpreter to runtime and use receiver methods (#39)
## Description

The codebase previously used "interpreter" terminology and standalone functions for expression operations.
This PR modernizes the architecture by renaming to "runtime" and converting operations to receiver methods.

- Rename `pkg/interpreter` to `pkg/runtime`.
- Move `ReduceOnce` to new `pkg/normalorder` package for reduction strategy isolation.
- Convert standalone functions (`Substitute`, `Rename`, `GetFree`, `IsFree`) to receiver methods on concrete expression types.
- Change `Set` from pointer receivers to value receivers for simpler usage.
- Update all references from "interpreter" to "runtime" terminology throughout the codebase.

### Decisions

- Operations like `Substitute`, `Rename`, `GetFree`, and `IsFree` are now methods on the `Expression` interface, implemented by each concrete type (`Variable`, `Abstraction`, `Application`).
- The `normalorder` package isolates the normal-order reduction strategy, allowing future reduction strategies to be added in separate packages.
- `Set` uses value receivers since a Go map shares its underlying storage when copied, so pointer semantics are unnecessary.

## Benefits

- Cleaner API: `expr.Substitute(target, replacement)` instead of `Substitute(expr, target, replacement)`.
- Better separation of concerns: reduction strategies are isolated from expression types.
- Consistent terminology: "runtime" better reflects the execution model.
- Simpler `Set` usage without needing to manage pointers.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #39
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-18 20:52:34 +00:00
e85cf7ceff refactor: make lambda expression types immutable (#38)
## Summary

- Change Abstraction, Application, and Variable to use private fields with getter methods.
- Return value types instead of pointers from constructors.
- Update all type switches to match value types instead of pointer types.

## Test plan

- [x] All existing tests pass (`make test`).

Reviewed-on: #38
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-17 22:00:54 +00:00
c2aa77cb92 refactor: remove visitor pattern (#37)
## Description

The codebase previously used the visitor pattern for traversing lambda calculus expressions.
This was a hold-over from avoiding the Go-idiomatic way of handling types.
This PR removes the visitor pattern in favor of direct method implementations.

- Remove `Visitor` interface from `expression.go`.
- Remove `Accept` methods from `Abstraction`, `Application`, and `Variable`.
- Remove `Accept` from `Expression` interface.
- Delete `stringify.go` and move `String()` logic directly into each type.
- Add compile-time interface checks (`var _ Expression = (*Type)(nil)`).
- Update `expr.Expression` to embed `fmt.Stringer` instead of declaring `String() string`.

### Decisions

- Moved `String()` implementations directly into each expression type rather than using a separate recursive function, as each type's string representation is simple enough to be self-contained.

## Benefits

- Simpler, more idiomatic Go code using type methods instead of visitor pattern.
- Reduced indirection and fewer files to maintain.
- Compile-time interface satisfaction checks catch implementation errors early.
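The compile-time check pattern mentioned above, in miniature (the `Variable` shape here is illustrative):

```go
package main

import "fmt"

// Expression embeds fmt.Stringer, as described above, rather than
// declaring its own String() string.
type Expression interface {
	fmt.Stringer
}

type Variable struct{ Name string }

// String lives directly on the type; no Accept/Visitor indirection.
func (v *Variable) String() string { return v.Name }

// Compile-time check: the build breaks on this line if *Variable ever
// stops satisfying Expression.
var _ Expression = (*Variable)(nil)

func main() {
	var e Expression = &Variable{Name: "x"}
	fmt.Println(e) // x
}
```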

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Closes #36

Reviewed-on: #37
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-17 20:46:07 +00:00
52d40adcc6 chore: remove unused deltanet package (#35)
## Description

The `deltanet` package was an unused stub in the codebase.
This PR removes it to reduce clutter.

- Removed `pkg/deltanet/deltanet.go`.
- Removed `pkg/deltanet/node.go`.

Closes #34

## Benefits

- Reduces codebase complexity by removing unused code.
- Eliminates potential confusion from an incomplete stub package.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #35
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-17 19:56:58 +00:00
1974ad582f refactor: move event system to reducer, remove engine package (#32)
## Description

This PR completes the MVC-inspired refactoring by moving the event system from the engine into the reducer.
The engine package is now removed entirely, as the reducer handles both reduction logic and lifecycle events.

- Add `pkg/reducer/events.go` with `StartEvent`, `StepEvent`, and `StopEvent`.
- Extend `Reducer` interface to embed `Emitter[Event]` and add `Expression()` method.
- Update `NormalOrderReducer` to embed `BaseEmitter` and emit lifecycle events during reduction.
- Update all plugins to attach to `Reducer` instead of `Engine`.
- Remove `internal/engine` package entirely.
- Add `Off()` method to `BaseEmitter` to complete the `Emitter` interface.
- Fix `Emitter.On` signature to use generic type `E` instead of `string`.

### Decisions

- The `Reducer` interface now combines reduction logic with event emission, making it the single orchestration point.
- Plugins attach directly to the reducer, simplifying the architecture.
- The `Expression()` method on `Reducer` provides access to current state for plugins.

## Benefits

- Simpler architecture with one fewer abstraction layer.
- Plugins are now mode-agnostic: they work with any `Reducer` implementation.
- Cleaner separation: reducers handle reduction, plugins observe via events.
- Easier to add new evaluation modes: just implement `Reducer` with an embedded emitter.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Reviewed-on: #32
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-17 00:27:36 +00:00
f8e1223463 refactor: extract Reducer interface and update engine to use abstractions (#31)
## Description

This PR builds on #30 to complete the abstraction layer for multi-mode evaluation support.
The engine now accepts abstract `expr.Expression` and `reducer.Reducer` types instead of concrete lambda types.

- Add `pkg/reducer/reducer.go` with `Reducer` interface defining `Reduce(expr.Expression, onStep) expr.Expression`.
- Add `pkg/lambda/reducer.go` with `NormalOrderReducer` that wraps the existing `ReduceAll` logic.
- Update `engine.Engine` to store `expr.Expression` and `reducer.Reducer` instead of `*lambda.Expression`.
- Update plugins to use `expr.Expression.String()` directly (no pointer dereference needed).
- Update main and tests to instantiate `NormalOrderReducer` and pass it to the engine.

### Decisions

- The `Reducer.Reduce` method returns the final expression and calls `onStep` after each reduction step with the current state.
- `NormalOrderReducer` type-asserts to `lambda.Expression` internally; other expression types are returned unchanged.
- The engine updates its `Expression` field both during reduction (via `onStep`) and after completion.

## Benefits

- The engine is now fully decoupled from lambda-specific types.
- New evaluation modes can be added by implementing `expr.Expression` and `reducer.Reducer`.
- Plugins work with any expression type that implements `expr.Expression`.
- Prepares the codebase for SKI combinators, typed lambda calculus, or other future modes.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Closes #30

Reviewed-on: #31
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-16 23:42:07 +00:00
e0114c736d refactor: extract abstract Expression interface (#30)
## Description

The codebase currently couples the engine and plugins directly to `lambda.Expression`.
This PR introduces an abstract `expr.Expression` interface to enable future support for multiple evaluation modes.

- Add `pkg/expr/expr.go` with an `Expression` interface requiring a `String()` method.
- Update `lambda.Expression` to embed `expr.Expression`.
- Add `String()` method to `Abstraction`, `Application`, and `Variable` types.
- Update plugins to use `String()` instead of `lambda.Stringify()`.

### Decisions

- The `expr.Expression` interface is minimal (only `String()`) to avoid over-constraining future expression types.
- The engine still stores `*lambda.Expression` directly rather than `expr.Expression`, because Go's interface semantics require pointer indirection for in-place mutation during reduction.
- Future evaluation modes will implement their own concrete types satisfying `expr.Expression`.

## Benefits

- Establishes a foundation for supporting multiple evaluation modes (SKI combinators, typed lambda calculus, etc.).
- Plugins now use the abstract `String()` method, making them more decoupled from the lambda-specific implementation.
- Prepares the codebase for a Reducer interface abstraction in a future PR.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Reviewed-on: #30
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-16 23:37:31 +00:00
5c54f4e195 fix: correct event handler registration in plugins (#29)
## Description

This PR fixes incorrect event handler registrations in two plugins, regressions introduced during the refactoring.
The bugs prevented the plugins from functioning as intended.

Fixed issues:
- Statistics plugin was registering `plugin.Step` for `StopEvent` instead of `plugin.Stop`, preventing statistics from being printed at the end of execution.
- Logs plugin was listening to `StopEvent` instead of `StepEvent`, causing it to log only once at the end instead of on each reduction step.

## Benefits

Statistics are now correctly printed at the end of execution.
Debug logs now correctly show each reduction step instead of just the final state.
Both plugins now work as originally intended, restoring their pre-refactoring behavior.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #29
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-14 00:35:02 +00:00
307b7ffd1e refactor: replace string-based emitter with type-safe generic event system (#28)
## Description

This PR refactors the event emitter system from a string-based message passing approach to a type-safe generic implementation using typed events.
The previous system relied on string message names which were error-prone and lacked compile-time safety.
This refactoring introduces a generic `BaseEmitter[E comparable]` that provides type safety while consolidating the various tracker packages into a unified plugins architecture.

Key changes:
- Replace `Emitter` with generic `BaseEmitter[E comparable]` for type-safe event handling.
- Add `Event` type enumeration with `StartEvent`, `StepEvent`, and `StopEvent` constants.
- Create `Listener[E]` interface with `BaseListener` implementation for better abstraction.
- Consolidate `explanation`, `performance`, and `statistics` packages into unified `internal/plugins` package.
- Simplify CLI initialization by using plugin constructors that handle their own event subscriptions.
- Add `Items()` iterator method to `Set` for idiomatic Go 1.23+ range loops over sets.

### Decisions

Use generics for type-safe event handling.
This provides compile-time guarantees that event types match their handlers while maintaining flexibility for future event types.

Consolidate trackers into plugins architecture.
Previously separate packages (`explanation`, `performance`, `statistics`) now live under `internal/plugins`, making the plugin pattern explicit and easier to extend.

Plugin constructors self-register with engine.
Each plugin's `New*` constructor now handles its own event subscriptions, reducing boilerplate in the main CLI.

## Benefits

Type safety prevents runtime errors from typos in event names.
The compiler now catches mismatched event types at compile time rather than failing silently at runtime.

Cleaner plugin architecture makes adding new features easier.
New plugins follow a consistent pattern and live in a single location.

Reduced boilerplate in main CLI.
Plugin initialization is now a single function call rather than manual event registration.

Better testability through interface-based design.
The `Listener[E]` interface allows for easier mocking and testing of event handlers.
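A rough sketch of the generic emitter idea (hypothetical shapes modeled on the description above; the real `BaseEmitter` likely differs in detail):

```go
package main

import "fmt"

// Event is the typed enumeration described above.
type Event int

const (
	StartEvent Event = iota
	StepEvent
	StopEvent
)

// BaseEmitter is type-safe in its event type: a mistyped event can't
// compile, unlike the old string-keyed emitter where a typo'd message
// name failed silently at runtime.
type BaseEmitter[E comparable] struct {
	handlers map[E][]func()
}

// On registers a handler for the given event.
func (b *BaseEmitter[E]) On(event E, fn func()) {
	if b.handlers == nil {
		b.handlers = make(map[E][]func())
	}
	b.handlers[event] = append(b.handlers[event], fn)
}

// Emit invokes every handler registered for the event.
func (b *BaseEmitter[E]) Emit(event E) {
	for _, fn := range b.handlers[event] {
		fn()
	}
}

func main() {
	var em BaseEmitter[Event]
	steps := 0
	em.On(StepEvent, func() { steps++ })
	em.Emit(StepEvent)
	em.Emit(StepEvent)
	em.Emit(StopEvent) // no handler registered; a no-op
	fmt.Println(steps) // 2
}
```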

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #28
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-14 00:30:21 +00:00
335ce95c50 feat: add comment support to saccharine language (#25)
## Description

The saccharine language previously lacked comment support, preventing proper code documentation.
This PR implements '#' comment syntax similar to Python.
Comments can appear on their own line or at the end of a line, with all content after '#' ignored until the next newline or EOF.
The tokenizer now detects '#' and skips the rest of the line without emitting any tokens.

### Decisions

Comments are silently consumed during tokenization rather than being preserved as tokens, keeping the token stream clean for the parser.
The implementation preserves newlines after comments by using the iterator's Back() method, allowing them to be processed as soft breaks.

## Benefits

Developers can now document their saccharine code with inline and full-line comments.
The implementation is minimal and efficient, adding no overhead to the token stream.
Tests verify that comments work correctly in various positions without breaking code execution.
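The comment rule can be illustrated with a simplified string-level version (the real logic lives inside the tokenizer and uses the iterator's `Back()` to re-see the newline; `stripComments` here is purely illustrative):

```go
package main

import "fmt"

// stripComments drops everything from '#' to the next newline or EOF,
// but keeps the newline itself so it still registers as a soft break.
func stripComments(src string) string {
	out := make([]rune, 0, len(src))
	inComment := false
	for _, r := range src {
		switch {
		case r == '\n':
			inComment = false
			out = append(out, r) // newline survives as a soft break
		case r == '#':
			inComment = true
		case !inComment:
			out = append(out, r)
		}
	}
	return string(out)
}

func main() {
	fmt.Printf("%q\n", stripComments("let x = y # inline\n# full line\nz"))
}
```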

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Closes #24

Reviewed-on: #25
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-13 02:00:00 +00:00
19652563a4 docs: add issue linking instructions to PR workflow (#23)
## Description

This PR updates CLAUDE.md to include documentation about linking issues in pull requests.
The documentation explains how to use the "Closes #<number>" syntax in commit messages and PR descriptions to automatically link and close issues when PRs are merged.

### Changes

- Added "Linking issues" section to Pull Request Management in [CLAUDE.md](CLAUDE.md:81-82).
- Documented the use of `Closes #<number>` syntax for automatic issue closure.

## Benefits

Developers will know to properly link issues in their PRs.
Issues will be automatically closed when their corresponding PRs are merged.
The repository will maintain better traceability between issues and PRs.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #23
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-13 01:45:53 +00:00
dec9af0244 feat: add issue templates for Gitea (#22)
## Description

This PR implements issue templates for the lambda repository to improve issue tracking and developer experience.
The templates follow Gitea best practices and provide clear guidance for users when creating issues.
Three distinct templates were created to handle different types of issues: feature requests, bug reports, and general issues.

### Changes

- Created [.gitea/ISSUE_TEMPLATE/feature.md](.gitea/ISSUE_TEMPLATE/feature.md) for feature requests and enhancements.
- Created [.gitea/ISSUE_TEMPLATE/bug.md](.gitea/ISSUE_TEMPLATE/bug.md) for bug reports.
- Created [.gitea/ISSUE_TEMPLATE/general.md](.gitea/ISSUE_TEMPLATE/general.md) for general issues.
- Updated [CLAUDE.md](CLAUDE.md) with issue management and workflow documentation.

### Decisions

All templates use Markdown format with YAML frontmatter, following the existing PR template style.
Each template includes pre-populated metadata (title prefix, labels, target branch).
Templates provide inline HTML comments to guide users without cluttering the final issue.
The templates are consistent with conventional commit format (feat:, fix:, etc.).

## Benefits

Users will have clear guidance when creating issues, reducing back-and-forth communication.
Issues will be more structured and contain necessary information for developers.
Pre-populated labels and titles ensure consistent issue categorization.
The templates align with the existing PR template style for a cohesive experience.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #22
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-13 01:43:36 +00:00
dbc3c5a8d4 Improve testing infrastructure with dynamic discovery and validation (#20)
## Summary

This PR enhances the testing infrastructure with dynamic test discovery, automated validation, and improved error handling.

## Changes

### Testing Infrastructure
- Added `TestSamplesValidity` integration test that validates all test files against their expected output.
- Implemented dynamic test discovery using `filepath.Glob` to automatically find all `.test` files.
- Renamed `benchmark_test.go` to `lambda_test.go` for better naming consistency.
- Consolidated helper functions into a single `runSample` function.
- Replaced all error handling with `assert` for consistent and clear test output.
- Required all `.test` files to have corresponding `.expected` files.
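
The discovery step described above can be sketched as follows; this is an illustration assuming a flat directory of `.test` files, each paired with an `.expected` file — the function names are not the PR's actual code.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// expectedPath maps a .test file to its required .expected counterpart.
func expectedPath(testFile string) string {
	return strings.TrimSuffix(testFile, ".test") + ".expected"
}

// discoverSamples finds every .test file in dir and pairs it with its
// expected-output file, mirroring the dynamic discovery described above.
func discoverSamples(dir string) ([][2]string, error) {
	tests, err := filepath.Glob(filepath.Join(dir, "*.test"))
	if err != nil {
		return nil, err
	}
	pairs := make([][2]string, 0, len(tests))
	for _, t := range tests {
		pairs = append(pairs, [2]string{t, expectedPath(t)})
	}
	return pairs, nil
}

func main() {
	fmt.Println(expectedPath("tests/church_5^5.test")) // → tests/church_5^5.expected
}
```

A test would then iterate over the discovered pairs, run each `.test` file through the pipeline, and compare against the `.expected` contents, failing if either file of a pair is missing.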

### Iterator Improvements
- Added a `Swap` method to the iterator to support the improved reduction algorithm.
- Improved the reduction algorithm with a LIFO-based iterator implementation.

### Build System
- Added `make test` target to run tests without benchmarks.
- Updated Makefile help text to include the new test target.

### Test Cases
- Added new test cases with expected outputs: `church_5^5`, `church_6^6`, `fast_list_2^30`, `list_2^30`.
- Added validation files for all test cases.

## Test plan

- Run tests with expected output validation.
- Run benchmarks to ensure performance is maintained.
- Verify make targets work correctly.

Reviewed-on: #20
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-13 01:20:47 +00:00
15c904ccc9 feat: improve reduction algorithm with LIFO-based iterator (#15)
## Description

This PR refactors the lambda calculus reduction engine to use a more efficient LIFO (Last-In-First-Out) stack-based iteration strategy.
Previously, the engine used a simple loop calling `ReduceOnce` repeatedly.
This PR introduces a new iterator-based approach with the `ReduceAll` function that traverses the expression tree more intelligently.

Changes include:

- Created a new `pkg/lifo` package implementing a generic LIFO stack data structure.
- Added `pkg/lambda/iterator.go` with an `Iterator` type for traversing lambda expressions.
- Refactored `pkg/lambda/reduce.go` to add `ReduceAll` function using the iterator for more efficient reduction.
- Updated `internal/engine/engine.go` to use `ReduceAll` instead of looping `ReduceOnce`.
- Renamed sample test files from `.txt` to `.test` extension.
- Fixed `.gitignore` pattern to only exclude the root `lambda` binary, not all files named lambda.
- Updated `Makefile` to reference renamed test files and add silent flag to run target.

### Decisions

- Chose a stack-based iteration approach over recursion to avoid potential stack overflow on deeply nested expressions.
- Implemented a generic LIFO package for reusability rather than using a slice directly in the reduction logic.
- Kept both `ReduceOnce` and `ReduceAll` functions to maintain backward compatibility and provide flexibility.
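
As a rough sketch of what such a generic LIFO package might look like (the actual `pkg/lifo` API may differ):

```go
package main

import "fmt"

// Stack is a minimal generic LIFO; method names are assumptions, not the
// project's actual pkg/lifo API.
type Stack[T any] struct{ items []T }

// Push appends a value to the top of the stack.
func (s *Stack[T]) Push(v T) { s.items = append(s.items, v) }

// Empty reports whether the stack holds no elements.
func (s *Stack[T]) Empty() bool { return len(s.items) == 0 }

// Pop removes and returns the most recently pushed element.
func (s *Stack[T]) Pop() (T, bool) {
	var zero T
	if len(s.items) == 0 {
		return zero, false
	}
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v, true
}

func main() {
	var s Stack[string]
	s.Push("outer")
	s.Push("inner")
	v, _ := s.Pop() // LIFO: last pushed comes out first
	fmt.Println(v)  // → inner
}
```

For tree traversal, the reducer would push child expressions onto such a stack and pop them in LIFO order, which visits the most recently discovered subtree first without recursion.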

## Performance

Benchmark results comparing main branch vs this PR on Apple M3:

| Test | Before (ms/op) | After (ms/op) | Change |
|------|----------------|---------------|--------|
| Thunk | 0.014 | 0.014 | 0.00% |
| Fast | 1.29 | 1.20 | **-7.04%** |
| Simple | 21.51 | 6.45 | **-70.01%** |
| Church | 157.67 | 43.00 | **-76.79%** |
| Saccharine | 185.25 | 178.99 | **-3.38%** |

**Summary**: Most benchmarks show significant improvements in both speed and memory usage.
The Saccharine benchmark shows only a marginal improvement and warrants further investigation.

## Benefits

- More efficient expression tree traversal with the iterator pattern.
- Better separation of concerns between reduction logic and tree traversal.
- Generic LIFO stack can be reused in other parts of the codebase.
- Cleaner engine implementation with callback-based step emission.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [ ] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Reviewed-on: #15
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-12 02:16:07 +00:00
609fe05250 feat: add benchmark target to Makefile (#14)
## Description

This PR adds benchmarking capabilities to the lambda interpreter.
The benchmarks measure performance across all sample files in the samples folder.
This enables consistent performance testing and helps track optimization improvements over time.

Changes in this PR:
- Added new `bench` target to Makefile for running Go benchmarks.
- Created `benchmark_test.go` with sub-benchmarks for each sample file (Church, Fast, Saccharine, Simple, Thunk).
- Used `b.Run` for organizing sample-specific sub-benchmarks and `b.Loop` for efficient iteration.
- Configured benchmarks to use fixed iterations (10x) and 4 CPU cores for reproducible results.

### Decisions

Used `b.Loop()` instead of traditional `for i := 0; i < b.N; i++` pattern.
This is the modern Go benchmarking idiom that provides better performance measurement.

Benchmarks run the full pipeline (parse, compile, execute, stringify) to measure end-to-end performance for each sample.
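
The two idioms compare as follows. This sketch drives a benchmark programmatically via `testing.Benchmark` with a hypothetical placeholder workload, since the real pipeline is not shown here; `b.Loop` requires Go 1.24 or newer.

```go
package main

import (
	"fmt"
	"testing"
)

// runSample is a hypothetical stand-in for the real parse/compile/execute/
// stringify pipeline exercised by the project's benchmarks.
func runSample(name string) int {
	return len(name)
}

func main() {
	// Modern idiom (Go 1.24+): b.Loop handles iteration counting and keeps
	// the compiler from optimizing the loop body away.
	result := testing.Benchmark(func(b *testing.B) {
		for b.Loop() {
			runSample("church")
		}
	})
	// The traditional idiom would instead read:
	//   for i := 0; i < b.N; i++ { runSample("church") }
	fmt.Println(result.N >= 1)
}
```

In a real `_test.go` file, each sample would get its own `b.Run(name, func(b *testing.B) { … })` sub-benchmark so results are reported per sample.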

## Benefits

Provides quantitative performance metrics for the lambda interpreter.
Enables tracking performance improvements or regressions across different sample complexities.
Consistent benchmark configuration (fixed iterations, CPU cores) ensures reproducible results for comparison.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #14
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-11 22:48:26 +00:00
0eff85f8fa feat: add output flag (#13)
## Description

The lambda CLI previously only wrote output to stdout using shell redirection.
This PR adds support for writing results to files using the `-o` flag.
This is implemented using a new `Destination` interface that mirrors the existing `Source` pattern.

Changes:
- Added `Destination` interface with `StdoutDestination` and `FileDestination` implementations.
- Added `-o` flag to CLI argument parser for output file specification.
- Updated `Config` to use `Destination` instead of direct output handling.
- Refactored main to use `Destination.Write()` for result output.
- Updated Makefile targets (`run`, `profile`, `explain`) to use `-o` flag instead of shell redirection.

### Decisions

The `-o` flag defaults to stdout when not specified or when set to `-`.
This maintains backward compatibility while providing explicit file output capability.
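
A minimal sketch of this design, assuming method and type names that mirror the description above (the project's actual signatures may differ):

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// Destination abstracts where results are written, mirroring the Source
// pattern for input.
type Destination interface {
	Write(result string) error
}

// StdoutDestination writes results to standard output.
type StdoutDestination struct{}

func (StdoutDestination) Write(result string) error {
	_, err := io.WriteString(os.Stdout, result)
	return err
}

// FileDestination writes results to the file at Path.
type FileDestination struct{ Path string }

func (d FileDestination) Write(result string) error {
	return os.WriteFile(d.Path, []byte(result), 0o644)
}

// selectDestination applies the default rule: an empty flag or "-" means stdout.
func selectDestination(flag string) Destination {
	if flag == "" || flag == "-" {
		return StdoutDestination{}
	}
	return FileDestination{Path: flag}
}

func main() {
	fmt.Printf("%T\n", selectDestination("-")) // stdout fallback for "-"
}
```

Callers depend only on the `Destination` interface, so adding further sinks (pipes, network) would not touch the main output path.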

## Benefits

- Cleaner command-line interface without shell redirection.
- Symmetric design with `Source` interface for input.
- More portable across different shells and environments.
- Explicit output handling improves code clarity.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [ ] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Reviewed-on: #13
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-11 22:14:48 +00:00
62699a0e37 feat: add file input flag (#12)
## Description

The lambda CLI previously only supported inline string expressions and stdin input.
This PR adds support for reading lambda expressions from files using the `-f` flag.
This makes it easier to work with larger programs stored in files.

Changes:
- Added `FileSource` type to `internal/config/source.go` for reading from file paths.
- Added `-f` flag to CLI argument parser with validation to prevent conflicting inputs.
- Updated Makefile targets (`run`, `profile`, `explain`) to use `-f` flag instead of stdin redirection.

### Decisions

The `-f` flag takes precedence over positional arguments.
If both are specified, an error is returned to avoid ambiguity.
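
The validation rule can be sketched like this; `resolveInput` and its return encoding are illustrative, not the PR's actual code.

```go
package main

import (
	"errors"
	"fmt"
)

// resolveInput applies the rule described above: the -f flag and a
// positional expression must not be combined.
func resolveInput(fileFlag, positional string) (string, error) {
	switch {
	case fileFlag != "" && positional != "":
		return "", errors.New("cannot combine -f with a positional expression")
	case fileFlag != "":
		return "file:" + fileFlag, nil
	case positional != "":
		return "inline:" + positional, nil
	default:
		return "stdin", nil
	}
}

func main() {
	_, err := resolveInput("prog.lam", `\x.x`)
	fmt.Println(err != nil) // → true: conflicting inputs are rejected
}
```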

## Benefits

- More intuitive workflow for file-based lambda programs.
- Cleaner Makefile targets without stdin redirection.
- Consistent with common CLI conventions (e.g., `grep -f`, `awk -f`).

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [ ] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Reviewed-on: #12
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-11 21:43:11 +00:00
90c205db2e refactor: improve Makefile structure and usability (#11)
## Description

This PR refactors the Makefile to improve usability, maintainability, and cross-platform compatibility.
The changes modernize the Makefile structure and make it more user-friendly.

Changes made:
- Renamed the `it` target to `build` for better clarity and conventional naming.
- Added a `help` target as the default goal to display available targets and their descriptions.
- Moved the TEST variable to the top with other configuration variables for better organization.
- Updated binary name from `lambda.exe` to `lambda` and used `${BINARY_NAME}` variable consistently throughout.
- Replaced all `@` prefixes with the `.SILENT:` directive for cleaner syntax.
- Added a `clean` target to remove all build artifacts (binary, program.out, profile directory).
- Made the `graph` target cross-platform by replacing macOS-specific `open` command with file:// URL echo.
- Updated .gitignore to include the `lambda` binary.

### Decisions

- Used `.SILENT:` directive instead of individual `@` prefixes for a cleaner, more maintainable Makefile.
- Made `help` the default target so users can run `make` without arguments to see available commands.
- Removed platform-specific commands (like `open`) in favor of cross-platform alternatives.

## Benefits

- Improved discoverability: Users can run `make` to see all available targets.
- Better maintainability: Using `${BINARY_NAME}` variable consistently makes future changes easier.
- Cross-platform compatibility: Removed macOS-specific commands.
- Cleaner syntax: `.SILENT:` directive eliminates repetitive `@` prefixes.
- More conventional: Renamed `it` to `build` follows standard Makefile conventions.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #11
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-11 21:13:41 +00:00
72a0afbbc0 perf: implement structural sharing for expression trees (#10)
## Description

The profiler revealed that 75% of CPU time was spent on memory allocation, with the primary bottleneck being expression copying during variable substitution. Every time a variable was substituted with an expression, `replacement.Copy()` would create a full deep copy of the entire expression tree.

This PR refactors the lambda calculus interpreter from a mutable, pointer-based implementation to an immutable, structurally-shared implementation. Expressions are now immutable value types that share unchanged subtrees instead of copying them.

**Key changes:**
- Made expression fields unexported to enforce immutability.
- Converted `Substitute()` and `Rename()` from in-place mutation to functional methods that return new expressions.
- Implemented structural sharing: methods return the same pointer when nothing changes.
- Removed `Copy()` method entirely - no more deep copying during substitution.
- Added getter methods for accessing expression fields from outside the package.

### Decisions

**Immutability over mutation:** Switched from mutable `*Expression` pointers with in-place updates to immutable expressions that return new trees. This is a fundamental architectural shift but aligns with functional programming principles and enables structural sharing.

**Structural sharing strategy:** When `Substitute()` or `Rename()` encounters an unchanged subtree, it returns the original pointer instead of creating a new object. This is safe because expressions are now immutable.

**Field encapsulation:** Made all expression fields unexported (`Parameter` → `parameter`, `Body` → `body`, etc.) to prevent external mutation. Added getter methods for controlled access.

## Benefits

**Performance improvements** (measured across all samples):

| Sample      | Before CPU | After CPU | Improvement | Copy Overhead Eliminated |
|-------------|-----------|----------|-------------|--------------------------|
| **saccharine** | 320ms | 160ms | **50% faster** | 50ms (15.6% of total) |
| **church** | 230ms | 170ms | **26% faster** | 40ms (17.4% of total) |
| **simple** | 30ms | 20ms | **33% faster** | 10ms (33.3% of total) |

**Wall-clock improvements:**
- saccharine: 503ms → 303ms (40% faster)
- church: 404ms → 302ms (25% faster)

**Memory allocation eliminated:**
- Before: `runtime.mallocgcSmallScanNoHeader` consumed 10-50ms per sample
- After: **Completely eliminated from profile** 
- All `Copy()` method calls removed from hot path

**The optimization in action:**

Before:
```go
func Substitute(e *Expression, target string, replacement Expression) {
    switch typed := (*e).(type) {
    case *Variable:
        if typed.Value == target {
            *e = replacement.Copy()  // Deep copy entire tree!
        }
    }
}
```

After:
```go
func (v *Variable) Substitute(target string, replacement Expression) Expression {
    if v.value == target {
        return replacement  // Share pointer directly, no allocation
    }
    return v  // Unchanged, share self
}
```
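
The same sharing applies at interior nodes. The following runnable sketch extends the `Variable` case with an `Abstraction` case; the types and field names are assumptions modeled on the description above, not the project's real API.

```go
package main

import "fmt"

// Expression is an immutable lambda-calculus term.
type Expression interface {
	Substitute(target string, replacement Expression) Expression
}

// Variable is a named term.
type Variable struct{ value string }

func (v *Variable) Substitute(target string, replacement Expression) Expression {
	if v.value == target {
		return replacement // share the replacement pointer directly
	}
	return v // unchanged: share self
}

// Abstraction binds a parameter over a body.
type Abstraction struct {
	parameter string
	body      Expression
}

func (a *Abstraction) Substitute(target string, replacement Expression) Expression {
	if a.parameter == target {
		return a // target is shadowed: entire subtree unchanged
	}
	body := a.body.Substitute(target, replacement)
	if body == a.body {
		return a // body unchanged: reuse the original node, no allocation
	}
	return &Abstraction{parameter: a.parameter, body: body}
}

func main() {
	id := &Abstraction{parameter: "x", body: &Variable{value: "x"}}
	// Substituting a variable that does not occur returns the same pointer.
	fmt.Println(id.Substitute("y", &Variable{value: "z"}) == Expression(id)) // → true
}
```

Only the spine from the root to each changed leaf is reallocated; every untouched subtree is shared by pointer, which is safe precisely because nothing can mutate it afterwards.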

**Codebase improvements:**
- More idiomatic functional programming style.
- Immutability prevents entire class of mutation bugs.
- Clearer ownership semantics (expressions are values, not mutable objects).
- Easier to reason about correctness (no action at a distance).

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`perf/structural-sharing`).
- [x] Tests pass (no test files exist, but build succeeds and profiling confirms correctness).
- [x] Documentation updated (added comments explaining structural sharing).

Reviewed-on: #10
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-11 02:15:38 +00:00
22019acbb1 feat: add PHONY declarations and docs target to Makefile (#9)
---
name: "Default Template"
about: "The default template for `lambda`."
title: "feat: add PHONY declarations and docs target to Makefile"
ref: "main"
assignees: []
labels: []
---

## Description

This PR improves the Makefile by adding proper `.PHONY` declarations and a new `docs` target.
The `.PHONY` declarations ensure make properly handles targets that don't create files.
The `docs` target provides an easy way to view Go package documentation locally using godoc.

Changes:
- Add `.PHONY` declaration for all non-file targets (`it`, `run`, `profile`, `explain`, `graph`, `docs`).
- Add `docs` target that runs godoc server on port 6060 with helpful usage message.

### Decisions

None.

## Benefits

- Prevents make from getting confused if files with target names exist.
- Provides convenient documentation viewing with `make docs`.
- Improves Makefile maintainability following best practices.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [ ] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #9
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-10 23:29:30 +00:00
d831312dc3 feat: complete PR template frontmatter fields (#8)
## Description

Adds all recommended frontmatter fields to the Gitea pull request template.
This includes `about`, `ref`, `assignees`, and `labels` fields in addition to the required `name` and `title` fields.
These fields provide better template documentation and default PR settings.

## Benefits

- The PR template now includes a description (`about`) explaining its purpose.
- Default target branch is explicitly set to `main` via the `ref` field.
- Template follows Gitea best practices with all available frontmatter fields.
- Provides a complete, well-documented template structure for contributors.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #8
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-10 23:00:32 +00:00
c0353c8e1f feat: add required name field to PR template (#7)
## Description

Gitea requires a `name` field in the frontmatter of pull request templates.
This PR adds the required `name: "Default Template"` field to fix the template validation error.
Additionally updates CLAUDE.md to reference the PR template format and use lowercase for PR title placeholder.

## Benefits

- The pull request template is now valid and will be properly loaded by Gitea.
- Users creating PRs will see the template automatically populated.
- CLAUDE.md now documents the PR template format for consistency.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #7
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-10 22:54:43 +00:00
e63c0df410 fix: move PR template to correct Gitea location (#6)
## Description

Gitea requires the pull request template to be at `.gitea/PULL_REQUEST_TEMPLATE.md` (as a file), not in a `.gitea/PULL_REQUEST_TEMPLATE/` directory.
This PR moves the template from `.gitea/PULL_REQUEST_TEMPLATE/default.md` to `.gitea/PULL_REQUEST_TEMPLATE.md` to ensure it works correctly with Gitea.

## Benefits

- The pull request template will now be automatically loaded when creating new PRs in Gitea.
- Follows Gitea's documented convention for PR templates.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #6
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-10 22:49:24 +00:00
8b6c632e4d feat: add pull request template (#5)
## Description

Added a standardized pull request template to improve PR quality and consistency. The template provides a clear structure for contributors to follow when creating PRs.

The template includes:
- Title placeholder following conventional commit format (`<type>: <description>`)
- Description section with guidance on context, rationale, and changes
- Optional Decisions section for architectural choices
- Benefits section to highlight improvements
- Checklist for conventional commits, branch naming, tests, and documentation

### Decisions

Created the template in `.gitea/PULL_REQUEST_TEMPLATE/default.md` to support multiple template types in the future.

## Benefits

- Ensures PRs follow project conventions (conventional commits, branch naming)
- Provides clear structure for describing changes
- Helps reviewers understand context and benefits
- Reduces back-and-forth by reminding contributors of requirements
- Supports future expansion with additional template types

Reviewed-on: #5
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-10 22:44:41 +00:00
86 changed files with 2134 additions and 1284 deletions

@@ -0,0 +1,58 @@
---
name: "Bug Report"
about: "Report a bug or unexpected behavior in the lambda runtime."
title: "fix: "
ref: "main"
assignees: []
labels:
- bug
---
## Context
<!--
Describe what you were trying to do when you encountered the bug.
Explain what you expected to happen.
-->
## Current Behavior
<!--
Describe what actually happened.
Be specific about the incorrect behavior or error.
-->
## Steps to Reproduce
<!--
Provide step-by-step instructions to reproduce the issue.
Include any relevant code, commands, or input.
-->
1.
2.
3.
## Expected Behavior
<!--
Describe what should happen instead.
-->
## Environment
<!--
Provide relevant information about your environment.
-->
- Lambda version:
- Go version:
- Operating system:
## Additional Context
<!--
Add any other context about the problem.
Include error messages, logs, or screenshots if applicable.
If none exist, omit this section.
-->

@@ -0,0 +1,44 @@
---
name: "Feature Request"
about: "Suggest a new feature or enhancement for the lambda runtime."
title: "feat: "
ref: "main"
assignees: []
labels:
- enhancement
---
## Context
<!--
Describe the problem or limitation you're encountering.
Explain why this feature would be valuable.
-->
## Proposed Solution
<!--
Describe your proposed solution or enhancement.
Be specific about what you want to see implemented.
-->
## Alternatives Considered
<!--
List any alternative solutions or approaches you've considered.
If none exist, omit this section.
-->
## Acceptance Criteria
<!--
List clear, testable criteria that define when this feature is complete.
Use bullet points starting with •
-->
## Additional Context
<!--
Add any other context, screenshots, or examples about the feature request.
If none exist, omit this section.
-->

@@ -0,0 +1,37 @@
---
name: "General Issue"
about: "Create an issue that doesn't fit other templates."
title: ""
ref: "main"
assignees: []
labels: []
---
## Context
<!--
Describe the background and context for this issue.
Explain why this issue exists.
-->
## Description
<!--
Provide a detailed description of what needs to be done.
Be clear and specific about the requirements.
-->
## Acceptance Criteria
<!--
List clear, testable criteria that define when this issue is complete.
Use bullet points starting with •
If none exist, omit this section.
-->
## Additional Context
<!--
Add any other relevant information, links, or references.
If none exist, omit this section.
-->

@@ -0,0 +1,37 @@
---
name: "Default Template"
about: "The default template for `lambda`."
title: "<type>: <description>"
ref: "main"
assignees: []
labels: []
---
## Description
<!--
First, describe the context for the PR.
Then, explain why the PR exists.
Finally, in concise, sentence-long bullets, explain each change.
-->
### Decisions
<!--
List any major architectural decisions here.
If none exist, omit this section.
-->
## Benefits
<!--
List any major benefits here.
How would this PR improve the codebase/product?
-->
## Checklist
- [ ] Code follows conventional commit format.
- [ ] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [ ] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

.gitignore
```diff
@@ -3,15 +3,13 @@
 # https://github.com/github/gitignore/blob/main/community/Golang/Go.AllowList.gitignore
 #
 # Binaries for programs and plugins
+/lambda
 *.exe
 *.exe~
 *.dll
 *.so
 *.dylib
-# Test binary, built with `go test -c`
-*.test
 # Output of the go coverage tool, specifically when used with LiteIDE
 *.out
```

```diff
@@ -48,7 +48,7 @@ linters:
     # More information: https://golangci-lint.run/usage/false-positives/#comments
     #
     # Please uncomment the following line if your code is not using the godoc format
-    - comments
+    # - comments
     # Common false positives
     # feel free to remove this if you don't have any false positives
@@ -126,6 +126,9 @@ linters:
     # Blank import should be only in a main or test package, or have a comment justifying it.
     - name: blank-imports
+    # Packages should have comments of the form "Package x ...".
+    - name: package-comments
     # context.Context() should be the first parameter of a function when provided as argument.
     - name: context-as-argument
       arguments:
@@ -157,6 +160,8 @@ linters:
       arguments:
         # make error messages clearer
         - "sayRepetitiveInsteadOfStutters"
+        # require comments on public interface methods
+        - "checkPublicInterface"
     # incrementing an integer variable by 1 is recommended to be done using the `++` operator
     - name: increment-decrement
```

````diff
@@ -1,8 +1,12 @@
 # Guide To `lambda`
-Absolutely NO advertisement of Claude.
-## Coding Styles
+## Documentation Style
+Use full sentences.
+Every sentence gets its own line in Markdown.
+Every sentence ends in a period.
+## Coding Style
 ### Conventional Commits
@@ -16,6 +20,8 @@ Use conventional commit format: `<type>: <description>`.
 - `fix: correct variable renaming in nested abstractions`
 - `docs: update Makefile documentation`
+DO NOT advertise Claude.
 ### Branch Names
 Use format: `<type>/<description>` with kebab-case.
@@ -29,16 +35,71 @@ Use format: `<type>/<description>` with kebab-case.
 - `docs/makefile-improvements`
 - `refactor/silent-directive`
+DO NOT advertise Claude.
+## Issue Management
+Use the `tea` CLI (Gitea command-line tool) for issue operations.
+**Common commands**:
+- `tea issues list` - List all issues.
+- `tea issues <number>` - View details of a specific issue.
+- `tea issues create --title "<title>" --body "<description>"` - Create a new issue.
+- `tea issues close <number>` - Close an issue.
+**Reading issues**: Use `tea issues <number>` to read the full details of an issue before starting work.
+## Issue Workflow
+When working on an issue:
+1. Read the issue using `tea issues <number>` to understand requirements.
+2. Create a feature branch following the branch naming convention.
+3. Make commits following the conventional commit format.
+4. Submit a pull request when ready.
+**Important**: Never commit directly to `main`.
+All work must be done in a feature branch and submitted via pull request.
 ## Pull Request Management
 Use the `tea` CLI (Gitea command-line tool) for PR operations instead of `gh`.
 **Common commands**:
-- `tea pr create` - Create a new pull request.
+- `tea pr create --title "<title>" --description "<body>"` - Create a new pull request.
 - `tea pr list` - List pull requests.
 - `tea pr checkout <number>` - Check out a PR locally.
 - `tea pr close <number>` - Close a pull request.
 - `tea pr merge <number>` - Merge a pull request.
-**Creating PRs**: Always create PRs in a branch other than `main`, to the `main` branch unless specified otherwise.
+**Note**: Use `--description` (not `--body`) for PR body content.
+**Creating PRs**: Always create PRs in a branch other than `main`, to the `main` branch unless specified otherwise. ALWAYS FOLLOW THE PR TEMPLATE, EXACTLY.
+**Linking issues**: When a PR solves an issue, reference the issue in both the commit message and PR description using `Closes #<number>`.
+This automatically links and closes the issue when the PR is merged.
+### Updating PRs
+When pushing additional changes to an existing PR, add a comment summarizing the new commits.
+This keeps reviewers informed of what changed since the initial PR description.
+Use the `tea` CLI to add comments to pull requests:
+```bash
+tea comment <number> "Comment text"
+```
+#### Examples
+```bash
+# Add a comment to PR #42
+tea comment 42 "Updated implementation based on feedback"
+# Add a multi-line comment
+tea comment 42 "Summary of changes:
+- Fixed bug in reducer
+- Added new tests"
+```
````

```diff
@@ -48,7 +48,7 @@ The "source code" for a work means the preferred form of the work for making mod
 A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.
-The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.
+The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code runtime used to run it.
 The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those
 subprograms and other parts of the work.
```

```diff
@@ -1,21 +1,55 @@
-BINARY_NAME=lambda.exe
-it:
-	@ go build -o ${BINARY_NAME} ./cmd/lambda
-	@ chmod +x ${BINARY_NAME}
+BINARY_NAME=lambda
 TEST=simple
-run: it
-	@ ./lambda.exe - < ./samples/$(TEST).txt > program.out
-profile: it
-	@ ./lambda.exe -p profile/cpu.prof - < ./samples/$(TEST).txt > program.out
-explain: it
-	@ ./lambda.exe -x -p profile/cpu.prof - < ./samples/$(TEST).txt > program.out
+.PHONY: help build run profile explain graph docs test bench lint clean
+.DEFAULT_GOAL := help
+.SILENT:
+help:
+	echo "Available targets:"
+	echo "  build   - Build the lambda executable"
+	echo "  run     - Build and run the lambda runtime (use TEST=<name> to specify sample)"
+	echo "  profile - Build and run with CPU profiling enabled"
+	echo "  explain - Build and run with explanation mode and profiling"
+	echo "  graph   - Generate and open CPU profile visualization"
+	echo "  docs    - Start local godoc server on port 6060"
+	echo "  test    - Run tests for all samples"
+	echo "  bench   - Run benchmarks for all samples"
+	echo "  lint    - Run golangci-lint on all packages"
+	echo "  clean   - Remove all build artifacts"
+build:
+	go build -o ${BINARY_NAME} ./cmd/lambda
+	chmod +x ${BINARY_NAME}
+run: build
+	./${BINARY_NAME} -s -f ./tests/$(TEST).test -o program.out
+profile: build
+	./${BINARY_NAME} -p profile/cpu.prof -f ./tests/$(TEST).test -o program.out
+explain: build
+	./${BINARY_NAME} -x -p profile/cpu.prof -f ./tests/$(TEST).test -o program.out > explain.out
 graph:
-	@ go tool pprof -raw -output=profile/cpu.raw profile/cpu.prof
-	@ go tool pprof -svg profile/cpu.prof > profile/cpu.svg
-	@ open profile/cpu.svg
+	go tool pprof -raw -output=profile/cpu.raw profile/cpu.prof
+	go tool pprof -svg profile/cpu.prof > profile/cpu.svg
+	echo ">>> View at 'file://$(PWD)/profile/cpu.svg'"
+docs:
+	echo ">>> View at 'http://localhost:6060/pkg/git.maximhutz.com/max/lambda/'"
+	go run golang.org/x/tools/cmd/godoc@latest -http=:6060
+test:
+	go test -v ./cmd/lambda
+bench:
+	go test -bench=. -benchtime=10x -cpu=4 ./cmd/lambda
+lint:
+	go run github.com/golangci/golangci-lint/v2/cmd/golangci-lint@latest run ./...
+clean:
+	rm -f ${BINARY_NAME}
+	rm -f program.out
+	rm -rf profile/
```


@@ -1,6 +1,6 @@
# lambda

Making a lambda calculus runtime in Go.

## Things to talk about


@@ -1,76 +1,32 @@
// Package main defines the 'lambda' command-line interface (CLI).
package main

import (
	"os"

	"github.com/spf13/cobra"
)

func Lambda() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "lambda",
		Short: "Lambda calculus interpreter",
		Long:  "A lambda calculus interpreter supporting multiple representations.",
		RunE: func(cmd *cobra.Command, _ []string) error {
			return cmd.Help()
		},
	}

	cmd.AddCommand(LambdaConvert())
	cmd.AddCommand(LambdaEngine())
	cmd.AddCommand(LambdaReduce())

	return cmd
}

func main() {
	lambda := Lambda()
	if err := lambda.Execute(); err != nil {
		os.Exit(1)
	}
}


@@ -0,0 +1,95 @@
package main
import (
"fmt"
"os"
"path/filepath"
"strings"
"github.com/spf13/cobra"
)
// inferReprFromPath returns the repr type based on file extension.
func inferReprFromPath(path string) (string, error) {
switch ext := strings.ToLower(filepath.Ext(path)); ext {
case ".lambda", ".lam", ".lc":
return "lambda", nil
case ".saccharine", ".sch":
return "saccharine", nil
default:
return "", fmt.Errorf("unknown file extension '%s'", ext)
}
}
func LambdaConvert() *cobra.Command {
var inputReprFlag, outputReprFlag string
cmd := &cobra.Command{
Use: "convert <input-file> <output-file>",
Aliases: []string{"conv"},
Short: "Convert between lambda calculus representations",
SilenceUsage: true,
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) != 2 {
return cmd.Help()
}
var err error
inputPath, outputPath := args[0], args[1]
// Use flag if provided, otherwise infer from extension.
inputRepr := inputReprFlag
if inputRepr == "" {
if inputRepr, err = inferReprFromPath(inputPath); err != nil {
return fmt.Errorf("input file: %w", err)
}
}
outputRepr := outputReprFlag
if outputRepr == "" {
if outputRepr, err = inferReprFromPath(outputPath); err != nil {
return fmt.Errorf("output file: %w", err)
}
}
// Read input file.
input, err := os.ReadFile(inputPath)
if err != nil {
return fmt.Errorf("reading input file: %w", err)
}
r := GetRegistry()
// Parse input into syntax tree.
repr, err := r.Unmarshal(string(input), inputRepr)
if err != nil {
return fmt.Errorf("parsing input: %w", err)
}
// Convert to output repr if different.
result, err := r.ConvertTo(repr, outputRepr)
if err != nil {
return fmt.Errorf("converting %s to %s: %w", inputRepr, outputRepr, err)
}
// Marshal output.
output, err := r.Marshal(result)
if err != nil {
return fmt.Errorf("marshaling output: %w", err)
}
// Write output file.
err = os.WriteFile(outputPath, []byte(output), 0644)
if err != nil {
return fmt.Errorf("writing output file: %w", err)
}
return nil
},
}
cmd.Flags().StringVarP(&inputReprFlag, "input", "i", "", "Input representation (inferred from extension if unset)")
cmd.Flags().StringVarP(&outputReprFlag, "output", "o", "", "Output representation (inferred from extension if unset)")
return cmd
}


@@ -0,0 +1,20 @@
package main
import (
"github.com/spf13/cobra"
)
func LambdaEngine() *cobra.Command {
cmd := &cobra.Command{
Use: "engine",
Aliases: []string{"eng"},
Short: "Information about available engines",
RunE: func(cmd *cobra.Command, _ []string) error {
return cmd.Help()
},
}
cmd.AddCommand(LambdaEngineList())
return cmd
}


@@ -0,0 +1,26 @@
package main
import (
"fmt"
"github.com/spf13/cobra"
)
func LambdaEngineList() *cobra.Command {
cmd := &cobra.Command{
Use: "list",
Aliases: []string{"ls"},
Short: "List available engines",
RunE: func(*cobra.Command, []string) error {
r := GetRegistry()
for engine := range r.ListEngines() {
fmt.Println(engine.Name())
}
return nil
},
}
return cmd
}

cmd/lambda/lambda_reduce.go Normal file

@@ -0,0 +1,108 @@
package main
import (
"fmt"
"github.com/spf13/cobra"
"git.maximhutz.com/max/lambda/internal/cli"
"git.maximhutz.com/max/lambda/internal/registry"
)
func LambdaReduce() *cobra.Command {
var inputReprFlag string
var engineFlag string
cmd := &cobra.Command{
Use: "reduce <input-file>",
Short: "Reduce a lambda calculus expression",
SilenceUsage: true,
Aliases: []string{"run"},
RunE: func(cmd *cobra.Command, args []string) error {
var err error
if len(args) != 1 {
return cmd.Help()
}
inputPath := args[0]
// Get input source.
var source cli.Source
if inputPath == "-" {
source = cli.StdinSource{}
} else {
source = cli.FileSource{Path: inputPath}
}
destination := cli.StdoutDestination{}
r := GetRegistry()
// Get input.
input, err := source.Extract()
if err != nil {
return err
}
// Use flag if provided, otherwise infer from extension.
inputRepr := inputReprFlag
if inputRepr == "" {
if inputRepr, err = inferReprFromPath(inputPath); err != nil {
return fmt.Errorf("input file: %w", err)
}
}
// Find engine.
var engine registry.Engine
if engineFlag == "" {
if engine, err = r.GetDefaultEngine(inputRepr); err != nil {
return err
}
} else {
if engine, err = r.GetEngine(engineFlag); err != nil {
return err
}
}
// Parse code into syntax tree.
repr, err := r.Unmarshal(input, inputRepr)
if err != nil {
return err
}
// Compile expression to lambda calculus.
compiled, err := r.ConvertTo(repr, "lambda")
if err != nil {
return err
}
// Create process.
process, err := engine.Load(compiled)
if err != nil {
return err
}
// Run reduction.
for process.Step(1) {
}
// Return the final reduced result.
result, err := process.Get()
if err != nil {
return err
}
output, err := r.Marshal(result)
if err != nil {
return err
}
return destination.Write(output)
},
}
cmd.Flags().StringVarP(&inputReprFlag, "input", "i", "", "Input representation (inferred from extension if unset)")
cmd.Flags().StringVarP(&engineFlag, "engine", "e", "", "Reduction engine (inferred from '--input' if unset)")
return cmd
}

cmd/lambda/registry.go Normal file

@@ -0,0 +1,26 @@
package main
import (
"git.maximhutz.com/max/lambda/internal/registry"
"git.maximhutz.com/max/lambda/pkg/convert"
"git.maximhutz.com/max/lambda/pkg/engine/normalorder"
"git.maximhutz.com/max/lambda/pkg/lambda"
"git.maximhutz.com/max/lambda/pkg/saccharine"
)
func GetRegistry() *registry.Registry {
r := registry.New()
// Codecs
(registry.RegisterConversion(r, convert.Saccharine2Lambda, "saccharine", "lambda"))
(registry.RegisterConversion(r, convert.Lambda2Saccharine, "lambda", "saccharine"))
// Engines
(registry.RegisterEngine(r, normalorder.NewProcess, "normalorder", "lambda"))
// Marshalers
(registry.RegisterCodec(r, lambda.Codec{}, "lambda"))
(registry.RegisterCodec(r, saccharine.Codec{}, "saccharine"))
return r
}

go.mod

@@ -1,3 +1,10 @@
module git.maximhutz.com/max/lambda module git.maximhutz.com/max/lambda
go 1.25.5 go 1.25.5
require github.com/spf13/cobra v1.10.2
require (
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/spf13/pflag v1.0.10 // indirect
)

go.sum Normal file

@@ -0,0 +1,11 @@
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU=
github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4=
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=

internal/cli/cli.go Normal file

@@ -0,0 +1,2 @@
// Package cli provides various utilities to the 'lambda' program.
package cli


@@ -0,0 +1,29 @@
package cli
import (
"fmt"
"os"
)
// A Destination is a method of writing output to the user.
type Destination interface {
// Write data to this destination.
Write(data string) error
}
// A StdoutDestination writes to stdout.
type StdoutDestination struct{}
// Write outputs to standard output.
func (d StdoutDestination) Write(data string) error {
fmt.Println(data)
return nil
}
// A FileDestination writes to a file.
type FileDestination struct{ Path string }
// Write outputs to a file.
func (d FileDestination) Write(data string) error {
return os.WriteFile(d.Path, []byte(data+"\n"), 0644)
}


@@ -1,18 +0,0 @@
// Package "cli" provides miscellaneous helper functions.
package cli
import (
"fmt"
"os"
)
// A helper function to handle errors in the program. If it is given an error,
// the program will exit, and print the error.
func HandleError(err error) {
if err == nil {
return
}
fmt.Fprintln(os.Stderr, "ERROR:", err)
os.Exit(1)
}

internal/cli/source.go Normal file

@@ -0,0 +1,44 @@
package cli
import (
"io"
"os"
)
// A Source is a method of extracting input from the user.
type Source interface {
// Extract fetches data from this source.
Extract() (string, error)
}
// A StringSource is defined by a string.
type StringSource struct{ Data string }
// Extract pulls input data from the internal string.
func (s StringSource) Extract() (string, error) { return s.Data, nil }
// A StdinSource pulls from standard input.
type StdinSource struct{}
// Extract pulls input data from standard input.
func (s StdinSource) Extract() (string, error) {
data, err := io.ReadAll(os.Stdin)
if err != nil {
return "", err
}
return string(data), nil
}
// A FileSource reads from a file.
type FileSource struct{ Path string }
// Extract pulls input data from the file source.
func (s FileSource) Extract() (string, error) {
data, err := os.ReadFile(s.Path)
if err != nil {
return "", err
}
return string(data), nil
}


@@ -1,11 +0,0 @@
// Package "config" parses and handles the user settings given to the program.
package config
// Configuration settings for the program.
type Config struct {
Source Source // The source code given to the program.
Verbose bool // Whether or not to print debug logs.
Explanation bool // Whether or not to print an explanation of the reduction.
Profile string // If not nil, print a CPU profile during execution.
Statistics bool // Whether or not to print statistics.
}


@@ -1,23 +0,0 @@
package config
import (
"log/slog"
"os"
)
// Returns a structured logger with the appropriate configurations.
func (c Config) GetLogger() *slog.Logger {
// By default, only print out errors.
level := slog.LevelError
// If the user set the output to be "VERBOSE", return the debug logs.
if c.Verbose {
level = slog.LevelInfo
}
return slog.New(
slog.NewTextHandler(os.Stdout, &slog.HandlerOptions{
Level: level,
}),
)
}


@@ -1,39 +0,0 @@
package config
import (
"flag"
"fmt"
)
// Extract the program configuration from the command-line arguments.
func FromArgs() (*Config, error) {
// Relevant flags.
verbose := flag.Bool("v", false, "Verbosity. If set, the program will print logs.")
explanation := flag.Bool("x", false, "Explanation. Whether or not to show all reduction steps.")
statistics := flag.Bool("s", false, "Statistics. If set, the process will print various statistics about the run.")
profile := flag.String("p", "", "CPU profiling. If an output file is defined, the program will profile its execution and dump its results into it.")
flag.Parse()
// There must only be one input argument.
if flag.NArg() == 0 {
return nil, fmt.Errorf("no input given")
} else if flag.NArg() > 1 {
return nil, fmt.Errorf("more than 1 command-line argument")
}
// Parse source type.
var source Source
if flag.Arg(0) == "-" {
source = StdinSource{}
} else {
source = StringSource{Data: flag.Arg(0)}
}
return &Config{
Source: source,
Verbose: *verbose,
Explanation: *explanation,
Profile: *profile,
Statistics: *statistics,
}, nil
}


@@ -1,29 +0,0 @@
package config
import (
"io"
"os"
)
// A method of extracting input from the user.
type Source interface {
// Fetch data from this source.
Extract() (string, error)
}
// A source defined by a string.
type StringSource struct{ Data string }
func (s StringSource) Extract() (string, error) { return s.Data, nil }
// A source pulling from standard input.
type StdinSource struct{}
func (s StdinSource) Extract() (string, error) {
data, err := io.ReadAll(os.Stdin)
if err != nil {
return "", err
}
return string(data), nil
}


@@ -1,32 +0,0 @@
// Package "engine" provides an extensible interface for users to interact with
// λ-calculus.
package engine
import (
"git.maximhutz.com/max/lambda/internal/config"
"git.maximhutz.com/max/lambda/pkg/emitter"
"git.maximhutz.com/max/lambda/pkg/lambda"
)
// A process for reducing one λ-expression.
type Engine struct {
Config *config.Config
Expression *lambda.Expression
emitter.Emitter
}
// Create a new engine, given an unreduced λ-expression.
func New(config *config.Config, expression *lambda.Expression) *Engine {
return &Engine{Config: config, Expression: expression}
}
// Begin the reduction process.
func (e Engine) Run() {
e.Emit("start")
for lambda.ReduceOnce(e.Expression) {
e.Emit("step")
}
e.Emit("end")
}


@@ -1,32 +0,0 @@
// Package "explanation" provides an observer to gather the reasoning during the
// reduction, and present a thorough explanation to the user for each step.
package explanation
import (
"fmt"
"git.maximhutz.com/max/lambda/internal/engine"
"git.maximhutz.com/max/lambda/pkg/lambda"
)
// Track the reductions made by a reduction process.
type Tracker struct {
process *engine.Engine
}
// Attaches a new explanation tracker to a process.
func Track(process *engine.Engine) *Tracker {
tracker := &Tracker{process: process}
process.On("start", tracker.Start)
process.On("step", tracker.Step)
return tracker
}
func (t *Tracker) Start() {
fmt.Println(lambda.Stringify(*t.process.Expression))
}
func (t *Tracker) Step() {
fmt.Println(" =", lambda.Stringify(*t.process.Expression))
}


@@ -1,53 +0,0 @@
// Package "performance" provides a tracker to observe CPU performance during
// execution.
package performance
import (
"os"
"path/filepath"
"runtime/pprof"
)
// Observes a reduction process, and publishes a CPU performance profile on
// completion.
type Tracker struct {
File string
filePointer *os.File
Error error
}
// Create a performance tracker that outputs a profile to "file".
func Track(file string) *Tracker {
return &Tracker{File: file}
}
// Begin profiling.
func (t *Tracker) Start() {
var absPath string
absPath, t.Error = filepath.Abs(t.File)
if t.Error != nil {
return
}
t.Error = os.MkdirAll(filepath.Dir(absPath), 0777)
if t.Error != nil {
return
}
t.filePointer, t.Error = os.Create(absPath)
if t.Error != nil {
return
}
t.Error = pprof.StartCPUProfile(t.filePointer)
if t.Error != nil {
return
}
}
// Stop profiling.
func (t *Tracker) End() {
pprof.StopCPUProfile()
t.filePointer.Close()
}


@@ -0,0 +1,58 @@
package registry
import (
"fmt"
"reflect"
"git.maximhutz.com/max/lambda/pkg/codec"
)
// A Codec is a type-erased codec that serializes and deserializes expressions
// as Expr values, regardless of the underlying representation type.
type Codec interface {
codec.Codec[Expr]
// InType returns the name of the representation this codec handles.
InType() string
}
// A registeredCodec adapts a typed codec.Codec[T] into the type-erased Codec
// interface. It wraps decoded values into Expr on decode, and extracts the
// underlying T from an Expr on encode.
type registeredCodec[T any] struct {
codec codec.Codec[T]
inType string
}
func (c registeredCodec[T]) Decode(s string) (Expr, error) {
t, err := c.codec.Decode(s)
if err != nil {
return nil, err
}
return NewExpr(c.inType, t), nil
}
func (c registeredCodec[T]) Encode(r Expr) (string, error) {
t, ok := r.Data().(T)
if !ok {
dataType := reflect.TypeOf(r.Data())
allowedType := reflect.TypeFor[T]()
return "", fmt.Errorf("Codec for '%s' cannot encode '%s'", allowedType, dataType)
}
return c.codec.Encode(t)
}
func (c registeredCodec[T]) InType() string { return c.inType }
// RegisterCodec registers a typed codec under the given representation name.
// Returns an error if a codec for that representation is already registered.
func RegisterCodec[T any](registry *Registry, m codec.Codec[T], inType string) error {
if _, ok := registry.codecs[inType]; ok {
return fmt.Errorf("Codec for '%s' already registered", inType)
}
registry.codecs[inType] = registeredCodec[T]{m, inType}
return nil
}
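The adapter pattern used by registeredCodec -- a generic codec hidden behind a type-erased interface, with a checked assertion on the way back in -- can be sketched in isolation. All names below (`typedCodec`, `erasedCodec`, `eraser`, `intCodec`) are illustrative stand-ins, not part of the package:

```go
package main

import (
	"fmt"
	"strconv"
)

// typedCodec mirrors the generic codec.Codec[T] interface.
type typedCodec[T any] interface {
	Encode(T) (string, error)
	Decode(string) (T, error)
}

// erasedCodec mirrors the registry's type-erased Codec: it traffics in
// `any` instead of a concrete T, so heterogeneous codecs can share one map.
type erasedCodec interface {
	Encode(any) (string, error)
	Decode(string) (any, error)
}

// eraser adapts a typedCodec[T] into an erasedCodec. Decode wraps the
// typed value as `any`; Encode asserts back to T, failing loudly on a
// mismatch, exactly as registeredCodec does.
type eraser[T any] struct{ inner typedCodec[T] }

func (e eraser[T]) Decode(s string) (any, error) { return e.inner.Decode(s) }

func (e eraser[T]) Encode(v any) (string, error) {
	t, ok := v.(T)
	if !ok {
		return "", fmt.Errorf("codec cannot encode value of type %T", v)
	}
	return e.inner.Encode(t)
}

// intCodec is a trivial concrete codec for the demonstration.
type intCodec struct{}

func (intCodec) Encode(n int) (string, error) { return strconv.Itoa(n), nil }
func (intCodec) Decode(s string) (int, error) { return strconv.Atoi(s) }

func main() {
	var c erasedCodec = eraser[int]{intCodec{}}
	s, _ := c.Encode(7)
	fmt.Println(s)
	_, err := c.Encode("not an int")
	fmt.Println(err != nil)
}
```

The payoff is that the registry can store every adapted codec in a single `map[string]erasedCodec` while the type check is deferred to encode time.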


@@ -0,0 +1,59 @@
package registry
import (
"fmt"
"git.maximhutz.com/max/lambda/pkg/codec"
)
// A Conversion is a type-erased transformation from one representation to
// another. It operates on Expr values, hiding the underlying representation
// types.
type Conversion interface {
// InType returns the name of the source representation.
InType() string
// OutType returns the name of the target representation.
OutType() string
// Run applies the conversion to the given expression. Returns an error if
// the expression's data does not match the expected source type.
Run(Expr) (Expr, error)
}
// A registeredConversion adapts a typed codec.Conversion[T, U] into the
// type-erased Conversion interface. It extracts the underlying T from an Expr,
// applies the conversion, and wraps the result as a new Expr.
type registeredConversion[T, U any] struct {
conversion codec.Conversion[T, U]
inType, outType string
}
func (c registeredConversion[T, U]) Run(expr Expr) (Expr, error) {
t, ok := expr.Data().(T)
if !ok {
return nil, fmt.Errorf("could not parse '%v' as '%s'", expr.Data(), c.inType)
}
u, err := c.conversion(t)
if err != nil {
return nil, err
}
return NewExpr(c.outType, u), nil
}
func (c registeredConversion[T, U]) InType() string { return c.inType }
func (c registeredConversion[T, U]) OutType() string { return c.outType }
// RegisterConversion registers a typed conversion function between two
// representations.
func RegisterConversion[T, U any](
registry *Registry,
conversion codec.Conversion[T, U],
inType, outType string,
) error {
registry.converter.Add(registeredConversion[T, U]{conversion, inType, outType})
return nil
}


@@ -0,0 +1,30 @@
package registry
// A Converter is a directed graph of conversions between representations. Each
// node is a representation name, and each edge is a Conversion.
type Converter struct {
data map[string][]Conversion
}
// NewConverter creates an empty Converter with no registered conversions.
func NewConverter() *Converter {
return &Converter{data: map[string][]Conversion{}}
}
// Add registers a conversion, adding an edge from its source representation
// to its target representation.
func (g *Converter) Add(c Conversion) {
conversionsFromIn, ok := g.data[c.InType()]
if !ok {
conversionsFromIn = []Conversion{}
}
conversionsFromIn = append(conversionsFromIn, c)
g.data[c.InType()] = conversionsFromIn
}
// ConversionsFrom returns all conversions that have the given representation
// as their source type.
func (g *Converter) ConversionsFrom(t string) []Conversion {
return g.data[t]
}


@@ -0,0 +1,58 @@
package registry
import (
"fmt"
"git.maximhutz.com/max/lambda/pkg/engine"
)
// An Engine is a type-erased evaluation engine that can load an expression
// into a runnable Process.
type Engine interface {
// Load prepares an expression for evaluation, returning a Process. Returns
// an error if the expression's data does not match the engine's expected
// representation type.
Load(Expr) (Process, error)
// Name returns the name of this engine.
Name() string
// InType returns the name of the representation this engine operates on.
InType() string
}
// A registeredEngine adapts a typed engine.Engine[T] into the type-erased
// Engine interface. It extracts the underlying T from an Expr before passing it
// to the engine.
type registeredEngine[T any] struct {
engine engine.Engine[T]
name string
inType string
}
func (e registeredEngine[T]) InType() string { return e.inType }
func (e registeredEngine[T]) Name() string { return e.name }
func (e registeredEngine[T]) Load(expr Expr) (Process, error) {
t, ok := expr.Data().(T)
if !ok {
return nil, fmt.Errorf("incorrect format '%s' for engine '%s'", expr.Repr(), e.inType)
}
process, err := e.engine(t)
if err != nil {
return nil, err
}
return registeredProcess[T]{process, e.inType}, nil
}
// RegisterEngine registers a typed engine under the given name. Returns an
// error if an engine with that name is already registered.
func RegisterEngine[T any](registry *Registry, e engine.Engine[T], name, inType string) error {
if _, ok := registry.engines[name]; ok {
return fmt.Errorf("engine '%s' already registered", name)
}
registry.engines[name] = &registeredEngine[T]{e, name, inType}
return nil
}

internal/registry/expr.go Normal file

@@ -0,0 +1,26 @@
package registry
// An Expr is a type-erased lambda calculus expression. It can have any type of
// representation, so long as that type is known to the registry it is handled
// by.
type Expr interface {
// Repr returns the name of the underlying representation. Two expressions
// with the same Repr() are assumed to have the same representation type.
Repr() string
// Data returns the underlying expression data.
Data() any
}
// A baseExpr is the default implementation of Expr.
type baseExpr struct {
id string
data any
}
func (e baseExpr) Repr() string { return e.id }
func (e baseExpr) Data() any { return e.data }
// NewExpr creates an Expr with the given representation name and data.
func NewExpr(id string, data any) Expr { return baseExpr{id, data} }


@@ -0,0 +1,35 @@
package registry
import (
"git.maximhutz.com/max/lambda/pkg/engine"
)
// A Process is a type-erased reduction process that operates on Expr values.
type Process interface {
engine.Process[Expr]
// InType returns the name of the representation this process operates on.
InType() string
}
// A registeredProcess adapts a typed engine.Process[T] into the type-erased
// Process interface. It wraps the result of Get into an Expr.
type registeredProcess[T any] struct {
process engine.Process[T]
inType string
}
func (p registeredProcess[T]) InType() string { return p.inType }
func (p registeredProcess[T]) Get() (Expr, error) {
s, err := p.process.Get()
if err != nil {
return nil, err
}
return NewExpr(p.inType, s), nil
}
func (p registeredProcess[T]) Step(i int) bool {
return p.process.Step(i)
}


@@ -0,0 +1,153 @@
// Package registry defines a structure to hold all available representations,
// engines, and conversions between them.
package registry
import (
"fmt"
"iter"
"maps"
)
// A Registry holds all representations, conversions, codecs, and engines
// available to the program.
type Registry struct {
codecs map[string]Codec
converter *Converter
engines map[string]Engine
}
// New makes an empty registry.
func New() *Registry {
return &Registry{
codecs: map[string]Codec{},
converter: NewConverter(),
engines: map[string]Engine{},
}
}
// GetEngine finds an engine based on its name. Returns an error if an engine
// with that name cannot be found.
func (r Registry) GetEngine(name string) (Engine, error) {
e, ok := r.engines[name]
if !ok {
return nil, fmt.Errorf("engine '%s' not found", name)
}
return e, nil
}
// ListEngines returns all engines registered with the registry.
func (r Registry) ListEngines() iter.Seq[Engine] {
return maps.Values(r.engines)
}
// GetDefaultEngine infers the preferred engine for a representation. Returns an
// error if one cannot be chosen.
func (r *Registry) GetDefaultEngine(id string) (Engine, error) {
for _, engine := range r.engines {
if engine.InType() == id {
return engine, nil
}
}
return r.GetEngine("normalorder")
// return nil, fmt.Errorf("no engine for '%s'", id)
}
// ConvertTo attempts to convert an expression of one type of representation to
// another. Returns the converted expression, otherwise an error.
//
// It can convert between any two types of representations, given there is a
// valid conversion path between them. It uses BFS to traverse a graph of
// conversion edges, and converts along the shortest path.
func (r *Registry) ConvertTo(expr Expr, outType string) (Expr, error) {
path, err := r.ConversionPath(expr.Repr(), outType)
if err != nil {
return nil, err
}
result := expr
for _, conversion := range path {
result, err = conversion.Run(result)
if err != nil {
return nil, fmt.Errorf("converting '%s' to '%s': %w", conversion.InType(), conversion.OutType(), err)
}
}
return result, err
}
// Marshal serializes an expression, given that representation has a codec.
// Returns an error if the representation is not registered, or it has no codec.
func (r *Registry) Marshal(expr Expr) (string, error) {
m, ok := r.codecs[expr.Repr()]
if !ok {
return "", fmt.Errorf("no marshaler for '%s'", expr.Repr())
}
return m.Encode(expr)
}
// Unmarshal deserializes an expression. Returns an error if the representation
// or a codec for it is not registered.
func (r *Registry) Unmarshal(s string, outType string) (Expr, error) {
m, ok := r.codecs[outType]
if !ok {
return nil, fmt.Errorf("no marshaler for '%s'", outType)
}
return m.Decode(s)
}
func reverse[T any](list []T) []T {
if list == nil {
return list
}
reversed := []T{}
for i := len(list) - 1; i >= 0; i-- {
reversed = append(reversed, list[i])
}
return reversed
}
// ConversionPath attempts to find a set of valid conversions that (if applied)
// convert one representation to another. Returns an error if no path can be
// found.
func (r *Registry) ConversionPath(from, to string) ([]Conversion, error) {
backtrack := map[string]Conversion{}
iteration := []string{from}
for len(iteration) > 0 {
nextIteration := []string{}
for _, item := range iteration {
for _, conversion := range r.converter.ConversionsFrom(item) {
if _, ok := backtrack[conversion.OutType()]; ok {
continue
}
nextIteration = append(nextIteration, conversion.OutType())
backtrack[conversion.OutType()] = conversion
}
}
iteration = nextIteration
}
reversedPath := []Conversion{}
current := to
for current != from {
conversion, ok := backtrack[current]
if !ok {
return nil, fmt.Errorf("no valid conversion from '%s' to '%s'", from, to)
}
reversedPath = append(reversedPath, conversion)
current = conversion.InType()
}
return reverse(reversedPath), nil
}
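The breadth-first search behind ConversionPath can be sketched on its own: discover each representation once, remember which edge first reached it, then walk the backtrack map from target to source and reverse. The names below (`edge`, `shortestPath`) are illustrative stand-ins for Conversion and the registry, not part of the package:

```go
package main

import "fmt"

// edge is a hypothetical stand-in for a Conversion: a directed hop from
// one representation name to another.
type edge struct{ from, to string }

// shortestPath mirrors ConversionPath: BFS from `from`, recording the
// first edge that reaches each node, then a backwards walk from `to`.
func shortestPath(edges []edge, from, to string) ([]edge, bool) {
	adj := map[string][]edge{}
	for _, e := range edges {
		adj[e.from] = append(adj[e.from], e)
	}

	backtrack := map[string]edge{}
	frontier := []string{from}
	for len(frontier) > 0 {
		next := []string{}
		for _, node := range frontier {
			for _, e := range adj[node] {
				if _, seen := backtrack[e.to]; seen {
					continue // already reached by a path at least as short
				}
				backtrack[e.to] = e
				next = append(next, e.to)
			}
		}
		frontier = next
	}

	// Walk backwards from the target, then reverse into forward order.
	var reversed []edge
	for cur := to; cur != from; {
		e, ok := backtrack[cur]
		if !ok {
			return nil, false // target unreachable
		}
		reversed = append(reversed, e)
		cur = e.from
	}
	path := make([]edge, 0, len(reversed))
	for i := len(reversed) - 1; i >= 0; i-- {
		path = append(path, reversed[i])
	}
	return path, true
}

func main() {
	edges := []edge{{"saccharine", "lambda"}, {"lambda", "debruijn"}}
	path, ok := shortestPath(edges, "saccharine", "debruijn")
	fmt.Println(ok, len(path)) // true 2
}
```

Because BFS expands the frontier one hop at a time, the first edge recorded for each node belongs to a shortest path, which is why ConvertTo applies the fewest conversions possible.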


@@ -1,28 +0,0 @@
// Package "statistics" provides a way to observe reduction speed during
// execution.
package statistics
import (
"fmt"
"strings"
)
// Statistics for a specific reduction.
type Results struct {
StepsTaken uint64 // Number of steps taken during execution.
TimeElapsed uint64 // The time (ms) taken for execution to complete.
}
// Returns the average number of operations per second of the execution.
func (r Results) OpsPerSecond() float32 {
return float32(r.StepsTaken) / (float32(r.TimeElapsed) / 1000)
}
// Format the results as a string.
func (r Results) String() string {
builder := strings.Builder{}
fmt.Fprintln(&builder, "Time Spent:", r.TimeElapsed, "ms")
fmt.Fprintln(&builder, "Steps:", r.StepsTaken)
fmt.Fprintln(&builder, "Speed:", r.OpsPerSecond(), "ops")
return builder.String()
}


@@ -1,36 +0,0 @@
package statistics
import (
"fmt"
"os"
"time"
)
// An observer, to track reduction performance.
type Tracker struct {
start time.Time
steps uint64
}
// Create a new reduction performance tracker.
func Track() *Tracker {
return &Tracker{}
}
func (t *Tracker) Start() {
t.start = time.Now()
t.steps = 0
}
func (t *Tracker) Step() {
t.steps++
}
func (t *Tracker) End() {
results := Results{
StepsTaken: t.steps,
TimeElapsed: uint64(time.Since(t.start).Milliseconds()),
}
fmt.Fprint(os.Stderr, results.String())
}

pkg/codec/codec.go Normal file

@@ -0,0 +1,20 @@
// Package codec defines processes to convert between different representations
// of lambda calculus, and serialize the different representations.
package codec
// A Conversion is a function that turns one representation into another.
// Returns an error if the input expression cannot be converted.
type Conversion[T, U any] = func(T) (U, error)
// A Codec is an object that can serialize/deserialize one type of
// representation. It is assumed that for any x ∈ T, Decode(Encode(x)) = x.
type Codec[T any] interface {
// Encode takes an expression, and returns its serialized format, as a
// string. Returns an error if the expression cannot be serialized.
Encode(T) (string, error)
// Decode takes the serialized format of an expression, and returns its true
// value. Returns an error if the string doesn't correctly represent any
// valid expression.
Decode(string) (T, error)
}
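The round-trip law in the comment above can be exercised against any concrete implementation. A minimal sketch with a toy `intCodec` (a hypothetical type used only for illustration, not part of the package):

```go
package main

import (
	"fmt"
	"strconv"
)

// intCodec is a toy Codec[int]: Encode renders the value in decimal and
// Decode parses it back, so Decode(Encode(x)) == x for every int.
type intCodec struct{}

func (intCodec) Encode(x int) (string, error) { return strconv.Itoa(x), nil }
func (intCodec) Decode(s string) (int, error) { return strconv.Atoi(s) }

func main() {
	c := intCodec{}
	for _, x := range []int{-7, 0, 42} {
		s, _ := c.Encode(x)
		y, _ := c.Decode(s)
		fmt.Println(x, "->", s, "->", y)
	}
}
```

Any type satisfying the interface can be checked the same way.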

View File

@@ -1,3 +1,5 @@
+// Package convert defines some standard conversions between various types of
+// representations.
package convert
import (
@@ -7,41 +9,41 @@ import (
"git.maximhutz.com/max/lambda/pkg/saccharine"
)
-func convertAtom(n *saccharine.Atom) lambda.Expression {
+func encodeAtom(n *saccharine.Atom) lambda.Expression {
-return lambda.NewVariable(n.Name)
+return lambda.Variable{Name: n.Name}
}
-func convertAbstraction(n *saccharine.Abstraction) lambda.Expression {
+func encodeAbstraction(n *saccharine.Abstraction) lambda.Expression {
-result := SaccharineToLambda(n.Body)
+result := encodeExpression(n.Body)
parameters := n.Parameters
// If the function has no parameters, it is a thunk. Lambda calculus still
// requires _some_ parameter exists, so generate one.
if len(parameters) == 0 {
-freeVars := lambda.GetFreeVariables(result)
+freeVars := lambda.GetFree(result)
freshName := lambda.GenerateFreshName(freeVars)
parameters = append(parameters, freshName)
}
for i := len(parameters) - 1; i >= 0; i-- {
-result = lambda.NewAbstraction(parameters[i], result)
+result = lambda.Abstraction{Parameter: parameters[i], Body: result}
}
return result
}
-func convertApplication(n *saccharine.Application) lambda.Expression {
+func encodeApplication(n *saccharine.Application) lambda.Expression {
-result := SaccharineToLambda(n.Abstraction)
+result := encodeExpression(n.Abstraction)
arguments := []lambda.Expression{}
for _, argument := range n.Arguments {
-convertedArgument := SaccharineToLambda(argument)
+encodedArgument := encodeExpression(argument)
-arguments = append(arguments, convertedArgument)
+arguments = append(arguments, encodedArgument)
}
for _, argument := range arguments {
-result = lambda.NewApplication(result, argument)
+result = lambda.Application{Abstraction: result, Argument: argument}
}
return result
@@ -51,24 +53,24 @@ func reduceLet(s *saccharine.LetStatement, e lambda.Expression) lambda.Expressio
var value lambda.Expression
if len(s.Parameters) == 0 {
-value = SaccharineToLambda(s.Body)
+value = encodeExpression(s.Body)
} else {
-value = convertAbstraction(saccharine.NewAbstraction(s.Parameters, s.Body))
+value = encodeAbstraction(&saccharine.Abstraction{Parameters: s.Parameters, Body: s.Body})
}
-return lambda.NewApplication(
+return lambda.Application{
-lambda.NewAbstraction(s.Name, e),
+Abstraction: lambda.Abstraction{Parameter: s.Name, Body: e},
-value,
+Argument: value,
-)
+}
}
func reduceDeclare(s *saccharine.DeclareStatement, e lambda.Expression) lambda.Expression {
-freshVar := lambda.GenerateFreshName(lambda.GetFreeVariables(e))
+freshVar := lambda.GenerateFreshName(lambda.GetFree(e))
-return lambda.NewApplication(
+return lambda.Application{
-lambda.NewAbstraction(freshVar, e),
+Abstraction: lambda.Abstraction{Parameter: freshVar, Body: e},
-SaccharineToLambda(s.Value),
+Argument: encodeExpression(s.Value),
-)
+}
}
func reduceStatement(s saccharine.Statement, e lambda.Expression) lambda.Expression {
@@ -82,8 +84,8 @@ func reduceStatement(s saccharine.Statement, e lambda.Expression) lambda.Express
}
}
-func convertClause(n *saccharine.Clause) lambda.Expression {
+func encodeClause(n *saccharine.Clause) lambda.Expression {
-result := SaccharineToLambda(n.Returns)
+result := encodeExpression(n.Returns)
for i := len(n.Statements) - 1; i >= 0; i-- {
result = reduceStatement(n.Statements[i], result)
@@ -92,17 +94,45 @@ func convertClause(n *saccharine.Clause) lambda.Expression {
return result
}
-func SaccharineToLambda(n saccharine.Expression) lambda.Expression {
+func encodeExpression(s saccharine.Expression) lambda.Expression {
-switch n := n.(type) {
+switch s := s.(type) {
case *saccharine.Atom:
-return convertAtom(n)
+return encodeAtom(s)
case *saccharine.Abstraction:
-return convertAbstraction(n)
+return encodeAbstraction(s)
case *saccharine.Application:
-return convertApplication(n)
+return encodeApplication(s)
case *saccharine.Clause:
-return convertClause(n)
+return encodeClause(s)
default:
-panic(fmt.Errorf("unknown expression type: %T", n))
+panic(fmt.Errorf("unknown expression type: %T", s))
}
}
+func decodeExpression(l lambda.Expression) saccharine.Expression {
+switch l := l.(type) {
+case lambda.Variable:
+return &saccharine.Atom{Name: l.Name}
+case lambda.Abstraction:
+return &saccharine.Abstraction{
+Parameters: []string{l.Parameter},
+Body: decodeExpression(l.Body)}
+case lambda.Application:
+return &saccharine.Application{
+Abstraction: decodeExpression(l.Abstraction),
+Arguments: []saccharine.Expression{decodeExpression(l.Argument)}}
+default:
+panic(fmt.Errorf("unknown expression type: %T", l))
+}
+}
+// Lambda2Saccharine converts a pure lambda calculus expression into its
+// Saccharine counterpart.
+func Lambda2Saccharine(l lambda.Expression) (saccharine.Expression, error) {
+return decodeExpression(l), nil
+}
+// Saccharine2Lambda desugars a saccharine expression into pure lambda calculus.
+func Saccharine2Lambda(s saccharine.Expression) (lambda.Expression, error) {
+return encodeExpression(s), nil
+}

View File

@@ -1,6 +0,0 @@
// Package "deltanet" is a reduction strategy using ∆-nets.
package deltanet
type Graph struct {
Nodes []Node
}

View File

@@ -1,94 +0,0 @@
package deltanet
/** ------------------------------------------------------------------------- */
// A connection between exactly two nodes in a graph.
type Edge struct {
A, B Node
}
// Returns all nodes the edge is connected to.
func (e Edge) GetConnections() []Node { return []Node{e.A, e.B} }
// Determines if a node is connected via this edge.
func (e Edge) IsConnected(n Node) bool { return e.A == n || e.B == n }
// Swaps an edge's connection with one node for another.
func (e *Edge) Swap(from Node, to Node) {
if e.A == from {
e.A = to
}
if e.B == from {
e.B = to
}
}
// Returns true if the edge is connected to each node via their principle ports.
func (e Edge) IsPrincipleEdge() bool {
return e.A.GetMainPort() == e && e.B.GetMainPort() == e
}
/** ------------------------------------------------------------------------- */
type Node interface {
// Returns the principle port that the node is attached to.
GetMainPort() Edge
// Returns all auxiliary ports that the node has. These ports are guaranteed
// to be ordered clockwise, as they would appear graphically.
GetAuxPorts() []Edge
// Returns the label of the node. May be blank.
GetLabel() string
}
/** ------------------------------------------------------------------------- */
type EraserNode struct {
Main Edge
}
func (n EraserNode) GetLabel() string { return "Ⓧ" }
func (n EraserNode) GetMainPort() Edge { return n.Main }
func (n EraserNode) GetAuxPorts() []Edge { return []Edge{} }
/** ------------------------------------------------------------------------- */
type ReplicatorNode struct {
Main Edge
Level uint
Aux []Edge
Deltas []int
}
func (n ReplicatorNode) GetLabel() string { return "" }
func (n ReplicatorNode) GetMainPort() Edge { return n.Main }
func (n ReplicatorNode) GetAuxPorts() []Edge { return n.Aux }
// Returns the level of the replicator node.
func (n ReplicatorNode) GetLevel() uint { return n.Level }
/** ------------------------------------------------------------------------- */
type FanNode struct {
Label string
Main Edge
Left, Right Edge
}
func (n FanNode) GetLabel() string { return n.Label }
func (n FanNode) GetMainPort() Edge { return n.Main }
func (n FanNode) GetAuxPorts() []Edge { return []Edge{n.Left, n.Right} }
/** ------------------------------------------------------------------------- */
type TerminalNode struct {
Label string
Main Edge
}
func (n TerminalNode) GetLabel() string { return n.Label }
func (n TerminalNode) GetMainPort() Edge { return n.Main }
func (n TerminalNode) GetAuxPorts() []Edge { return []Edge{} }
/** ------------------------------------------------------------------------- */

View File

@@ -1,54 +0,0 @@
package emitter
import "git.maximhutz.com/max/lambda/pkg/set"
type Observer struct {
fn func()
message string
emitter *Emitter
}
type Emitter struct {
listeners map[string]*set.Set[*Observer]
}
func Ignore[T any](fn func()) func(T) {
return func(T) { fn() }
}
func (e *Emitter) On(message string, fn func()) *Observer {
observer := &Observer{
fn: fn,
message: message,
emitter: e,
}
if e.listeners == nil {
e.listeners = map[string]*set.Set[*Observer]{}
}
if e.listeners[message] == nil {
e.listeners[message] = set.New[*Observer]()
}
e.listeners[message].Add(observer)
return observer
}
func (o *Observer) Off() {
if o.emitter.listeners[o.message] == nil {
return
}
o.emitter.listeners[o.message].Remove(o)
}
func (e *Emitter) Emit(message string) {
if e.listeners[message] == nil {
return
}
for listener := range *e.listeners[message] {
listener.fn()
}
}

pkg/engine/engine.go Normal file (+18 lines)
View File

@@ -0,0 +1,18 @@
// Package engine defines a general process of reducing a lambda calculus
// expression.
package engine
// A Process handles the reduction of a single expression.
type Process[T any] interface {
// Get the current state of the process.
// Returns an error if the current state cannot be represented.
Get() (T, error)
// Step performs reduction(s) on the representation. If the number of steps
// defined is less than zero, it will perform as many reductions as
// possible. Returns whether a reduction was performed.
Step(int) bool
}
// An Engine is a function that generates reduction processes.
type Engine[T any] = func(T) (Process[T], error)
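The Process contract (Step performing up to n reductions, n < 0 meaning "as many as possible") is easy to illustrate with a trivial implementation; the `countdown` type below is hypothetical and exists only to exercise the interface:

```go
package main

import "fmt"

// Process mirrors the engine interface: Get reports the current state and
// Step performs up to n reductions, with n < 0 meaning "reduce until done".
type Process[T any] interface {
	Get() (T, error)
	Step(int) bool
}

// countdown is a toy process whose "reduction" decrements an int toward zero.
type countdown struct{ n int }

func (c *countdown) Get() (int, error) { return c.n, nil }

func (c *countdown) Step(steps int) bool {
	reduced := false
	for steps != 0 && c.n > 0 {
		c.n--
		steps--
		reduced = true
	}
	return reduced
}

func main() {
	var p Process[int] = &countdown{n: 5}
	p.Step(2)  // two reductions
	p.Step(-1) // run to "normal form"
	v, _ := p.Get()
	fmt.Println(v) // 0
}
```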

View File

@@ -0,0 +1,42 @@
// Package normalorder contains an engine that reduces a 'lambda.Expression'
// in the normal order.
package normalorder
import (
"git.maximhutz.com/max/lambda/pkg/engine"
"git.maximhutz.com/max/lambda/pkg/lambda"
)
type process struct {
expr lambda.Expression
}
func (e process) Get() (lambda.Expression, error) {
return e.expr, nil
}
func (e *process) Set(l lambda.Expression) error {
e.expr = l
return nil
}
func (e *process) Step(i int) bool {
reduced := false
for ; i != 0; i-- {
next, ok := ReduceOnce(e.expr)
if !ok {
break
}
e.expr = next
reduced = true
}
return reduced
}
// NewProcess creates a new reduction process.
func NewProcess(expression lambda.Expression) (engine.Process[lambda.Expression], error) {
return &process{expr: expression}, nil
}
var _ engine.Process[lambda.Expression] = (*process)(nil)
var _ engine.Engine[lambda.Expression] = NewProcess

View File

@@ -0,0 +1,39 @@
package normalorder
import "git.maximhutz.com/max/lambda/pkg/lambda"
// ReduceOnce attempts to apply a single reduction to a lambda expression.
// It returns (1) the final expression (reduced, or not), and (2) whether or not
// a reduction was applied.
//
// If a reduction is not applied, it returns the original expression.
func ReduceOnce(e lambda.Expression) (lambda.Expression, bool) {
switch e := e.(type) {
case lambda.Abstraction:
body, reduced := ReduceOnce(e.Body)
if reduced {
return lambda.Abstraction{Parameter: e.Parameter, Body: body}, true
}
return e, false
case lambda.Application:
if fn, fnOk := e.Abstraction.(lambda.Abstraction); fnOk {
return lambda.Substitute(fn.Body, fn.Parameter, e.Argument), true
}
abs, reduced := ReduceOnce(e.Abstraction)
if reduced {
return lambda.Application{Abstraction: abs, Argument: e.Argument}, true
}
arg, reduced := ReduceOnce(e.Argument)
if reduced {
return lambda.Application{Abstraction: e.Abstraction, Argument: arg}, true
}
return e, false
default:
return e, false
}
}
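The leftmost-outermost discipline above reduces the head redex before touching either subexpression. A self-contained sketch of the same strategy on a mini AST (illustrative types, and a deliberately naive substitution without the α-renaming the real Substitute performs):

```go
package main

import "fmt"

type Expr interface{ String() string }

type Var struct{ Name string }
type Abs struct {
	Param string
	Body  Expr
}
type App struct{ Fn, Arg Expr }

func (v Var) String() string { return v.Name }
func (a Abs) String() string { return "\\" + a.Param + "." + a.Body.String() }
func (a App) String() string { return "(" + a.Fn.String() + " " + a.Arg.String() + ")" }

// subst naively replaces free occurrences of name with r (no capture check).
func subst(e Expr, name string, r Expr) Expr {
	switch e := e.(type) {
	case Var:
		if e.Name == name {
			return r
		}
		return e
	case Abs:
		if e.Param == name {
			return e
		}
		return Abs{e.Param, subst(e.Body, name, r)}
	case App:
		return App{subst(e.Fn, name, r), subst(e.Arg, name, r)}
	}
	return e
}

// reduceOnce mirrors the normal-order strategy above: a β-redex at the head
// fires first, then the function position, then the argument.
func reduceOnce(e Expr) (Expr, bool) {
	switch e := e.(type) {
	case Abs:
		if b, ok := reduceOnce(e.Body); ok {
			return Abs{e.Param, b}, true
		}
	case App:
		if fn, ok := e.Fn.(Abs); ok {
			return subst(fn.Body, fn.Param, e.Arg), true
		}
		if f, ok := reduceOnce(e.Fn); ok {
			return App{f, e.Arg}, true
		}
		if a, ok := reduceOnce(e.Arg); ok {
			return App{e.Fn, a}, true
		}
	}
	return e, false
}

func main() {
	// (\x.x \y.y) reduces in one step to \y.y.
	e := App{Abs{"x", Var{"x"}}, Abs{"y", Var{"y"}}}
	r, _ := reduceOnce(e)
	fmt.Println(r) // \y.y
}
```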

View File

@@ -1,35 +0,0 @@
package fifo
import "fmt"
type FIFO[T any] []T
func New[T any](items ...T) *FIFO[T] {
f := FIFO[T](items)
return &f
}
func (f *FIFO[T]) Push(item T) {
*f = append(*f, item)
}
func (f *FIFO[T]) Empty() bool {
return len(*f) == 0
}
func (f *FIFO[T]) MustPop() T {
var item T
*f, item = (*f)[:len(*f)-1], (*f)[len(*f)-1]
return item
}
func (f *FIFO[T]) Pop() (T, error) {
var item T
if f.Empty() {
return item, fmt.Errorf("stack is exhausted")
}
return f.MustPop(), nil
}

View File

@@ -1,35 +1,25 @@
-/*
-Package "iterator"
-*/
+// Package iterator defines a generic way to iterate over a slice of data.
package iterator
import "fmt"
-// An iterator over slices.
+// An Iterator traverses a slice.
type Iterator[T any] struct {
items []T
index int
}
-// Create a new iterator, over a set of items.
+// Of creates a new iterator over a set of items.
func Of[T any](items []T) *Iterator[T] {
return &Iterator[T]{items: items, index: 0}
}
-// Returns the current position of the iterator.
+// Index returns the current position of the iterator.
func (i Iterator[T]) Index() int {
return i.index
}
-func (i Iterator[T]) Copy() *Iterator[T] {
-return &Iterator[T]{items: i.items, index: i.index}
-}
-func (i *Iterator[T]) Sync(o *Iterator[T]) {
-i.index = o.index
-}
-// Create a new iterator, over a set of items.
+// Get returns the datum at the current position of the iterator.
func (i Iterator[T]) Get() (T, error) {
var null T
if i.Done() {
@@ -39,22 +29,26 @@ func (i Iterator[T]) Get() (T, error) {
return i.items[i.index], nil
}
+// MustGet is a version of Get that panics if the datum cannot be returned.
func (i Iterator[T]) MustGet() T {
-var null T
+t, err := i.Get()
-if i.Done() {
+if err != nil {
-return null
+panic(fmt.Errorf("cannot get current token: %w", err))
}
-return i.items[i.index]
+return t
}
+// Forward increments the iterator if the iterator is not yet at the end of the
+// slice.
func (i *Iterator[T]) Forward() {
if !i.Done() {
i.index++
}
}
-// Create a new iterator, over a set of items.
+// Next attempts to increment the iterator. Returns an error if it cannot be
+// incremented.
func (i *Iterator[T]) Next() (T, error) {
item, err := i.Get()
if err == nil {
@@ -64,22 +58,37 @@ func (i *Iterator[T]) Next() (T, error) {
return item, err
}
-// Create a new iterator, over a set of items.
+// Back decrements the iterator. If the iterator is already at the beginning of
+// the slice, this is a no-op.
func (i *Iterator[T]) Back() {
i.index = max(i.index-1, 0)
}
-// Returns the current position of the iterator.
+// Done returns whether the iterator is at the end of the slice or not.
func (i Iterator[T]) Done() bool {
return i.index == len(i.items)
}
-func Do[T any, U any](i *Iterator[T], fn func(i *Iterator[T]) (U, error)) (U, error) {
-i2 := i.Copy()
-out, err := fn(i2)
-if err == nil {
-i.Sync(i2)
+// While increments the iterator as long as the current item satisfies the
+// predicate. The first item that does not match is left unconsumed.
+func (i *Iterator[T]) While(fn func(T) bool) {
+for !i.Done() {
+if !fn(i.MustGet()) {
+return
+}
+i.Forward()
+}
+}
+// Try attempts to perform an operation using the iterator. If the operation
+// succeeds, the iterator keeps its new position. If the operation fails, the
+// iterator is rolled back, and an error is returned.
+func Try[T any, U any](i *Iterator[T], fn func(i *Iterator[T]) (U, error)) (U, error) {
+saved := i.index
+out, err := fn(i)
+if err != nil {
+i.index = saved
}
return out, err

pkg/lambda/codec.go Normal file (+27 lines)
View File

@@ -0,0 +1,27 @@
package lambda
import (
"git.maximhutz.com/max/lambda/pkg/codec"
)
// A Codec is a [codec.Codec] that serializes lambda calculus expressions.
type Codec struct{}
// Decode parses a string as lambda calculus.
// Returns an error if it cannot.
func (m Codec) Decode(s string) (Expression, error) {
tokens, err := scan(s)
if err != nil {
return nil, err
}
return parse(tokens)
}
// Encode turns a lambda calculus expression into a string.
// Returns an error if it cannot.
func (m Codec) Encode(e Expression) (string, error) {
return Stringify(e), nil
}
var _ codec.Codec[Expression] = (*Codec)(nil)

View File

@@ -1,70 +0,0 @@
package lambda
type Expression interface {
Accept(Visitor)
Copy() Expression
}
/** ------------------------------------------------------------------------- */
type Abstraction struct {
Parameter string
Body Expression
}
func (a *Abstraction) Copy() Expression {
return NewAbstraction(a.Parameter, a.Body.Copy())
}
func (a *Abstraction) Accept(v Visitor) {
v.VisitAbstraction(a)
}
func NewAbstraction(parameter string, body Expression) *Abstraction {
return &Abstraction{Parameter: parameter, Body: body}
}
/** ------------------------------------------------------------------------- */
type Application struct {
Abstraction Expression
Argument Expression
}
func (a *Application) Copy() Expression {
return NewApplication(a.Abstraction.Copy(), a.Argument.Copy())
}
func (a *Application) Accept(v Visitor) {
v.VisitApplication(a)
}
func NewApplication(function Expression, argument Expression) *Application {
return &Application{Abstraction: function, Argument: argument}
}
/** ------------------------------------------------------------------------- */
type Variable struct {
Value string
}
func (v *Variable) Copy() Expression {
return NewVariable(v.Value)
}
func (v *Variable) Accept(visitor Visitor) {
visitor.VisitVariable(v)
}
func NewVariable(name string) *Variable {
return &Variable{Value: name}
}
/** ------------------------------------------------------------------------- */
type Visitor interface {
VisitAbstraction(*Abstraction)
VisitApplication(*Application)
VisitVariable(*Variable)
}

View File

@@ -6,7 +6,9 @@ import (
"git.maximhutz.com/max/lambda/pkg/set"
)
+// GenerateFreshName generates a variable name that is not in the used set.
+// This function does not mutate the used set.
-func GenerateFreshName(used *set.Set[string]) string {
+func GenerateFreshName(used set.Set[string]) string {
for i := uint64(0); ; i++ {
attempt := "_" + string(strconv.AppendUint(nil, i, 10))

View File

@@ -1,20 +1,27 @@
package lambda
-import "git.maximhutz.com/max/lambda/pkg/set"
+import (
+"fmt"
+"git.maximhutz.com/max/lambda/pkg/set"
+)
-func GetFreeVariables(e Expression) *set.Set[string] {
+// GetFree returns the set of all free variable names in the expression.
+// This function does not mutate the input expression.
+// The returned set is newly allocated and can be modified by the caller.
+func GetFree(e Expression) set.Set[string] {
switch e := e.(type) {
-case *Variable:
+case Variable:
-return set.New(e.Value)
+return set.New(e.Name)
-case *Abstraction:
+case Abstraction:
-vars := GetFreeVariables(e.Body)
+vars := GetFree(e.Body)
vars.Remove(e.Parameter)
return vars
-case *Application:
+case Application:
-vars := GetFreeVariables(e.Abstraction)
+vars := GetFree(e.Abstraction)
-vars.Merge(GetFreeVariables(e.Argument))
+vars.Merge(GetFree(e.Argument))
return vars
default:
-return nil
+panic(fmt.Errorf("unknown expression type: %v", e))
}
}

View File

@@ -1,14 +1,18 @@
package lambda
+import "fmt"
-func IsFreeVariable(n string, e Expression) bool {
+// IsFree returns true if the variable name n occurs free in the expression.
+// This function does not mutate the input expression.
+func IsFree(e Expression, n string) bool {
switch e := e.(type) {
-case *Variable:
+case Variable:
-return e.Value == n
+return e.Name == n
-case *Abstraction:
+case Abstraction:
-return e.Parameter != n && IsFreeVariable(n, e.Body)
+return e.Parameter != n && IsFree(e.Body, n)
-case *Application:
+case Application:
-return IsFreeVariable(n, e.Abstraction) || IsFreeVariable(n, e.Argument)
+return IsFree(e.Abstraction, n) || IsFree(e.Argument, n)
default:
-return false
+panic(fmt.Errorf("unknown expression type: %v", e))
}
}

pkg/lambda/lambda.go Normal file (+31 lines)
View File

@@ -0,0 +1,31 @@
// Package lambda defines the AST for the untyped lambda calculus.
package lambda
// An Expression is a node in the lambda calculus abstract syntax tree.
// It is a sealed interface; only types in this package may implement it.
type Expression interface {
expression()
}
// An Abstraction binds a single parameter over a body expression.
type Abstraction struct {
Parameter string
Body Expression
}
func (a Abstraction) expression() {}
// An Application applies an abstraction to a single argument.
type Application struct {
Abstraction Expression
Argument Expression
}
func (a Application) expression() {}
// A Variable is a named reference to a bound or free variable.
type Variable struct {
Name string
}
func (v Variable) expression() {}

pkg/lambda/parse.go Normal file (+80 lines)
View File

@@ -0,0 +1,80 @@
package lambda
import (
"fmt"
"git.maximhutz.com/max/lambda/pkg/iterator"
"git.maximhutz.com/max/lambda/pkg/token"
)
type tokenIterator = iterator.Iterator[lambdaToken]
func parseVariable(i *tokenIterator) (Expression, error) {
if tok, err := token.ParseRawToken(i, tokenAtom); err != nil {
return nil, fmt.Errorf("expected variable (col %d): %w", i.Index(), err)
} else {
return Variable{Name: tok.Value}, nil
}
}
func parseAbstraction(i *tokenIterator) (Expression, error) {
if _, err := token.ParseRawToken(i, tokenSlash); err != nil {
return nil, fmt.Errorf("no backslash (col %d): %w", i.Index(), err)
} else if param, err := token.ParseRawToken(i, tokenAtom); err != nil {
return nil, fmt.Errorf("no param (col %d): %w", i.Index(), err)
} else if _, err := token.ParseRawToken(i, tokenDot); err != nil {
return nil, fmt.Errorf("no dot (col %d): %w", i.Index(), err)
} else if body, err := parseExpression(i); err != nil {
return nil, err
} else {
return Abstraction{Parameter: param.Value, Body: body}, nil
}
}
func parseApplication(i *tokenIterator) (Expression, error) {
if _, err := token.ParseRawToken(i, tokenOpenParen); err != nil {
return nil, fmt.Errorf("no opening paren (col %d): %w", i.Index(), err)
} else if abstraction, err := parseExpression(i); err != nil {
return nil, fmt.Errorf("expected function expression: %w", err)
} else if argument, err := parseExpression(i); err != nil {
return nil, fmt.Errorf("expected argument expression: %w", err)
} else if _, err := token.ParseRawToken(i, tokenCloseParen); err != nil {
return nil, fmt.Errorf("no closing paren (col %d): %w", i.Index(), err)
} else {
return Application{Abstraction: abstraction, Argument: argument}, nil
}
}
func parseExpression(i *tokenIterator) (Expression, error) {
peek, err := i.Get()
if err != nil {
return nil, err
}
switch peek.Type {
case tokenOpenParen:
return parseApplication(i)
case tokenSlash:
return parseAbstraction(i)
case tokenAtom:
return parseVariable(i)
default:
return nil, fmt.Errorf("expected expression, got '%v' (col %d)", peek.Value, peek.Column)
}
}
// parse converts a token slice into a lambda calculus expression.
func parse(tokens []lambdaToken) (Expression, error) {
i := iterator.Of(tokens)
exp, err := parseExpression(i)
if err != nil {
return nil, err
}
if !i.Done() {
return nil, fmt.Errorf("expected EOF, found more tokens (col %d)", i.MustGet().Column)
}
return exp, nil
}

View File

@@ -1,27 +0,0 @@
package lambda
import "git.maximhutz.com/max/lambda/pkg/fifo"
func ReduceOnce(e *Expression) bool {
stack := fifo.New(e)
for !stack.Empty() {
top := stack.MustPop()
switch typed := (*top).(type) {
case *Abstraction:
stack.Push(&typed.Body)
case *Application:
if fn, fnOk := typed.Abstraction.(*Abstraction); fnOk {
Substitute(&fn.Body, fn.Parameter, typed.Argument)
*top = fn.Body
return true
}
stack.Push(&typed.Argument)
stack.Push(&typed.Abstraction)
}
}
return false
}

View File

@@ -1,19 +1,31 @@
package lambda
+import "fmt"
-func Rename(e Expression, target string, substitute string) {
+// Rename replaces all occurrences of the target variable name with the new name.
+func Rename(e Expression, target string, newName string) Expression {
switch e := e.(type) {
-case *Variable:
+case Variable:
-if e.Value == target {
+if e.Name == target {
-e.Value = substitute
+return Variable{Name: newName}
}
-case *Abstraction:
-if e.Parameter == target {
-e.Parameter = substitute
-}
-Rename(e.Body, target, substitute)
+return e
-case *Application:
+case Abstraction:
-Rename(e.Abstraction, target, substitute)
+newParam := e.Parameter
-Rename(e.Argument, target, substitute)
+if e.Parameter == target {
+newParam = newName
+}
+newBody := Rename(e.Body, target, newName)
+return Abstraction{Parameter: newParam, Body: newBody}
+case Application:
+newAbs := Rename(e.Abstraction, target, newName)
+newArg := Rename(e.Argument, target, newName)
+return Application{Abstraction: newAbs, Argument: newArg}
+default:
+panic(fmt.Errorf("unknown expression type: %v", e))
}
}

pkg/lambda/scan.go Normal file (+18 lines)
View File

@@ -0,0 +1,18 @@
package lambda
import "git.maximhutz.com/max/lambda/pkg/token"
// scanner is the declarative lexer for the lambda calculus.
var scanner = token.NewScanner(
token.On(`\(`, tokenOpenParen, 0),
token.On(`\)`, tokenCloseParen, 0),
token.On(`\\`, tokenSlash, 0),
token.On(`\.`, tokenDot, 0),
token.On(`[a-zA-Z0-9_]+`, tokenAtom, 0),
token.Skip[tokenType](`\s+`, 0),
)
// scan tokenizes an input string into lambda calculus tokens.
func scan(input string) ([]lambdaToken, error) {
return scanner.Scan(input)
}
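The declarative scanner above pairs a regex with a token type and a skip rule for whitespace. The idea can be sketched with the standard regexp package; this is an illustrative reimplementation, not the token package's actual API:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// rule anchors its pattern so matches always start at the current offset.
type rule struct {
	re   *regexp.Regexp
	name string // empty means "skip" (e.g. whitespace)
}

func on(pattern, name string) rule {
	return rule{re: regexp.MustCompile(`^(?:` + pattern + `)`), name: name}
}

// The same token classes as the lambda scanner above.
var rules = []rule{
	on(`\(`, "("), on(`\)`, ")"), on(`\\`, `\`), on(`\.`, "."),
	on(`[a-zA-Z0-9_]+`, "ATOM"), on(`\s+`, ""),
}

// scan repeatedly applies the first matching rule at the current position,
// emitting "TYPE:lexeme" strings and dropping skipped input.
func scan(input string) ([]string, error) {
	var out []string
	for input != "" {
		matched := false
		for _, r := range rules {
			if m := r.re.FindString(input); m != "" {
				input = input[len(m):]
				if r.name != "" {
					out = append(out, r.name+":"+m)
				}
				matched = true
				break
			}
		}
		if !matched {
			return nil, fmt.Errorf("unexpected input at %q", input)
		}
	}
	return out, nil
}

func main() {
	toks, _ := scan(`\x.(x y)`)
	fmt.Println(strings.Join(toks, " "))
}
```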

View File

@@ -1,32 +1,17 @@
package lambda
-import "strings"
+import "fmt"
-type stringifyVisitor struct {
-builder strings.Builder
-}
-func (v *stringifyVisitor) VisitVariable(a *Variable) {
-v.builder.WriteString(a.Value)
-}
-func (v *stringifyVisitor) VisitAbstraction(f *Abstraction) {
-v.builder.WriteRune('\\')
-v.builder.WriteString(f.Parameter)
-v.builder.WriteRune('.')
-f.Body.Accept(v)
-}
-func (v *stringifyVisitor) VisitApplication(c *Application) {
-v.builder.WriteRune('(')
-c.Abstraction.Accept(v)
-v.builder.WriteRune(' ')
-c.Argument.Accept(v)
-v.builder.WriteRune(')')
-}
+// Stringify turns an expression into a string.
func Stringify(e Expression) string {
-b := &stringifyVisitor{builder: strings.Builder{}}
-e.Accept(b)
-return b.builder.String()
+switch e := e.(type) {
+case Variable:
+return e.Name
+case Abstraction:
+return "\\" + e.Parameter + "." + Stringify(e.Body)
+case Application:
+return "(" + Stringify(e.Abstraction) + " " + Stringify(e.Argument) + ")"
+default:
+panic(fmt.Errorf("unknown expression type: %v", e))
+}
}

View File

@@ -1,27 +1,41 @@
package lambda
+import "fmt"
-func Substitute(e *Expression, target string, replacement Expression) {
-switch typed := (*e).(type) {
-case *Variable:
-if typed.Value == target {
-*e = replacement.Copy()
-}
-case *Abstraction:
-if typed.Parameter == target {
-return
-}
-if IsFreeVariable(typed.Parameter, replacement) {
-replacementFreeVars := GetFreeVariables(replacement)
-used := GetFreeVariables(typed.Body)
-used.Merge(replacementFreeVars)
-freshVar := GenerateFreshName(used)
-Rename(typed, typed.Parameter, freshVar)
-}
-Substitute(&typed.Body, target, replacement)
-case *Application:
-Substitute(&typed.Abstraction, target, replacement)
-Substitute(&typed.Argument, target, replacement)
+// Substitute replaces all free occurrences of the target variable with the
+// replacement expression. Alpha-renaming is performed automatically to
+// avoid variable capture.
+func Substitute(e Expression, target string, replacement Expression) Expression {
+switch e := e.(type) {
+case Variable:
+if e.Name == target {
+return replacement
+}
+return e
+case Abstraction:
+if e.Parameter == target {
+return e
+}
+body := e.Body
+param := e.Parameter
+if IsFree(replacement, param) {
+freeVars := GetFree(replacement)
+freeVars.Merge(GetFree(body))
+freshVar := GenerateFreshName(freeVars)
+body = Rename(body, param, freshVar)
+param = freshVar
+}
+newBody := Substitute(body, target, replacement)
+return Abstraction{Parameter: param, Body: newBody}
+case Application:
+abs := Substitute(e.Abstraction, target, replacement)
+arg := Substitute(e.Argument, target, replacement)
+return Application{Abstraction: abs, Argument: arg}
+default:
+panic(fmt.Errorf("unknown expression type: %v", e))
}
}
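The capture-avoiding discipline in Substitute can be demonstrated end to end on a standalone mini AST. This is an illustrative reimplementation under assumed names: `free`, `fresh`, and `rename` here play the roles of GetFree, GenerateFreshName, and Rename:

```go
package main

import (
	"fmt"
	"strconv"
)

type Expr interface{ String() string }

type Var struct{ Name string }
type Abs struct {
	Param string
	Body  Expr
}
type App struct{ Fn, Arg Expr }

func (v Var) String() string { return v.Name }
func (a Abs) String() string { return "\\" + a.Param + "." + a.Body.String() }
func (a App) String() string { return "(" + a.Fn.String() + " " + a.Arg.String() + ")" }

// free collects the free variable names of e into a set.
func free(e Expr) map[string]bool {
	switch e := e.(type) {
	case Var:
		return map[string]bool{e.Name: true}
	case Abs:
		fv := free(e.Body)
		delete(fv, e.Param)
		return fv
	case App:
		fv := free(e.Fn)
		for n := range free(e.Arg) {
			fv[n] = true
		}
		return fv
	}
	return nil
}

// fresh returns a name not present in used.
func fresh(used map[string]bool) string {
	for i := 0; ; i++ {
		if name := "_" + strconv.Itoa(i); !used[name] {
			return name
		}
	}
}

// rename replaces every occurrence of from with to, binders included.
func rename(e Expr, from, to string) Expr {
	switch e := e.(type) {
	case Var:
		if e.Name == from {
			return Var{to}
		}
		return e
	case Abs:
		p := e.Param
		if p == from {
			p = to
		}
		return Abs{p, rename(e.Body, from, to)}
	case App:
		return App{rename(e.Fn, from, to), rename(e.Arg, from, to)}
	}
	return e
}

// subst replaces free occurrences of target with r, renaming a binder first
// whenever it would capture a free variable of r.
func subst(e Expr, target string, r Expr) Expr {
	switch e := e.(type) {
	case Var:
		if e.Name == target {
			return r
		}
		return e
	case Abs:
		if e.Param == target {
			return e // target is shadowed here
		}
		param, body := e.Param, e.Body
		if free(r)[param] {
			used := free(r)
			for n := range free(body) {
				used[n] = true
			}
			nv := fresh(used)
			body, param = rename(body, param, nv), nv
		}
		return Abs{param, subst(body, target, r)}
	case App:
		return App{subst(e.Fn, target, r), subst(e.Arg, target, r)}
	}
	return e
}

func main() {
	// (\x.y)[y := x] must not capture x: the binder is renamed first.
	fmt.Println(subst(Abs{"x", Var{"y"}}, "y", Var{"x"})) // \_0.x
}
```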

pkg/lambda/token.go Normal file (+45 lines)
View File

@@ -0,0 +1,45 @@
package lambda
import (
"fmt"
"git.maximhutz.com/max/lambda/pkg/token"
)
// A tokenType is an identifier for any token in the lambda calculus.
type tokenType int
// All official tokens of the lambda calculus.
const (
// tokenOpenParen denotes the '(' token.
tokenOpenParen tokenType = iota
// tokenCloseParen denotes the ')' token.
tokenCloseParen
// tokenSlash denotes the '\' token.
tokenSlash
// tokenDot denotes the '.' token.
tokenDot
// tokenAtom denotes an alpha-numeric variable.
tokenAtom
)
// Name returns the type of the tokenType, as a string.
func (t tokenType) Name() string {
switch t {
case tokenOpenParen:
return "("
case tokenCloseParen:
return ")"
case tokenSlash:
return "\\"
case tokenDot:
return "."
case tokenAtom:
return "ATOM"
default:
panic(fmt.Errorf("unknown token type %v", t))
}
}
// lambdaToken is the concrete token type for the lambda calculus.
type lambdaToken = token.Token[tokenType]

pkg/saccharine/codec.go Normal file (+27 lines)
View File

@@ -0,0 +1,27 @@
package saccharine
import (
"git.maximhutz.com/max/lambda/pkg/codec"
)
// A Codec is a [codec.Codec] that serializes Saccharine expressions.
type Codec struct{}
// Decode parses a string as Saccharine source code. Returns an error
// if it cannot.
func (c Codec) Decode(s string) (Expression, error) {
tokens, err := scan(s)
if err != nil {
return nil, err
}
return parse(tokens)
}
// Encode turns a Saccharine expression into a string. Returns an error if it
// cannot.
func (c Codec) Encode(e Expression) (string, error) {
return stringifyExpression(e), nil
}
var _ codec.Codec[Expression] = (*Codec)(nil)

View File

@@ -1,49 +0,0 @@
package saccharine
type Expression interface {
IsExpression()
}
/** ------------------------------------------------------------------------- */
type Abstraction struct {
Parameters []string
Body Expression
}
type Application struct {
Abstraction Expression
Arguments []Expression
}
type Atom struct {
Name string
}
type Clause struct {
Statements []Statement
Returns Expression
}
func (Abstraction) IsExpression() {}
func (Application) IsExpression() {}
func (Atom) IsExpression() {}
func (Clause) IsExpression() {}
/** ------------------------------------------------------------------------- */
func NewAbstraction(parameter []string, body Expression) *Abstraction {
return &Abstraction{Parameters: parameter, Body: body}
}
func NewApplication(abstraction Expression, arguments []Expression) *Application {
return &Application{Abstraction: abstraction, Arguments: arguments}
}
func NewAtom(name string) *Atom {
return &Atom{Name: name}
}
func NewClause(statements []Statement, returns Expression) *Clause {
return &Clause{Statements: statements, Returns: returns}
}

@@ -5,123 +5,89 @@ import (
 	"fmt"
 	"git.maximhutz.com/max/lambda/pkg/iterator"
-	"git.maximhutz.com/max/lambda/pkg/saccharine/token"
-	"git.maximhutz.com/max/lambda/pkg/trace"
+	"git.maximhutz.com/max/lambda/pkg/token"
 )
-type TokenIterator = iterator.Iterator[token.Token]
+type tokenIterator = iterator.Iterator[Token]
-func parseRawToken(i *TokenIterator, expected token.Type) (*token.Token, error) {
-	return iterator.Do(i, func(i *TokenIterator) (*token.Token, error) {
-		if tok, err := i.Next(); err != nil {
-			return nil, err
-		} else if tok.Type != expected {
-			return nil, fmt.Errorf("expected token %v, got %v'", token.Name(expected), tok.Value)
-		} else {
-			return &tok, nil
-		}
-	})
-}
-func passSoftBreaks(i *TokenIterator) {
+func passSoftBreaks(i *tokenIterator) {
 	for {
-		if _, err := parseRawToken(i, token.SoftBreak); err != nil {
+		if _, err := token.ParseRawToken(i, TokenSoftBreak); err != nil {
 			return
 		}
 	}
 }
-func parseToken(i *TokenIterator, expected token.Type, ignoreSoftBreaks bool) (*token.Token, error) {
-	return iterator.Do(i, func(i *TokenIterator) (*token.Token, error) {
-		if ignoreSoftBreaks {
-			passSoftBreaks(i)
-		}
-		return parseRawToken(i, expected)
-	})
+func parseToken(i *tokenIterator, expected TokenType, ignoreSoftBreaks bool) (*Token, error) {
+	if ignoreSoftBreaks {
+		passSoftBreaks(i)
+	}
+	return token.ParseRawToken(i, expected)
 }
-func parseString(i *TokenIterator) (string, error) {
-	if tok, err := parseToken(i, token.Atom, true); err != nil {
-		return "", trace.Wrap(err, "no variable (col %d)", i.Index())
+func parseString(i *tokenIterator) (string, error) {
+	if tok, err := parseToken(i, TokenAtom, true); err != nil {
+		return "", fmt.Errorf("no variable (col %d): %w", i.Index(), err)
 	} else {
 		return tok.Value, nil
 	}
 }
-func parseBreak(i *TokenIterator) (*token.Token, error) {
-	if tok, softErr := parseRawToken(i, token.SoftBreak); softErr == nil {
+func parseBreak(i *tokenIterator) (*Token, error) {
+	if tok, softErr := token.ParseRawToken(i, TokenSoftBreak); softErr == nil {
 		return tok, nil
-	} else if tok, hardErr := parseRawToken(i, token.HardBreak); hardErr == nil {
+	} else if tok, hardErr := token.ParseRawToken(i, TokenHardBreak); hardErr == nil {
 		return tok, nil
 	} else {
 		return nil, errors.Join(softErr, hardErr)
 	}
 }
-func parseList[U any](i *TokenIterator, fn func(*TokenIterator) (U, error), minimum int) ([]U, error) {
-	results := []U{}
-	for {
-		if u, err := fn(i); err != nil {
-			if len(results) < minimum {
-				return nil, trace.Wrap(err, "expected at least '%v' items, got only '%v'", minimum, len(results))
-			}
-			return results, nil
-		} else {
-			results = append(results, u)
-		}
-	}
-}
-func parseAbstraction(i *TokenIterator) (*Abstraction, error) {
-	return iterator.Do(i, func(i *TokenIterator) (*Abstraction, error) {
-		if _, err := parseToken(i, token.Slash, true); err != nil {
-			return nil, trace.Wrap(err, "no function slash (col %d)", i.MustGet().Column)
-		} else if parameters, err := parseList(i, parseString, 0); err != nil {
-			return nil, err
-		} else if _, err = parseToken(i, token.Dot, true); err != nil {
-			return nil, trace.Wrap(err, "no function dot (col %d)", i.MustGet().Column)
-		} else if body, err := parseExpression(i); err != nil {
-			return nil, err
-		} else {
-			return NewAbstraction(parameters, body), nil
-		}
-	})
-}
-func parseApplication(i *TokenIterator) (*Application, error) {
-	return iterator.Do(i, func(i *TokenIterator) (*Application, error) {
-		if _, err := parseToken(i, token.OpenParen, true); err != nil {
-			return nil, trace.Wrap(err, "no openning brackets (col %d)", i.MustGet().Column)
-		} else if expressions, err := parseList(i, parseExpression, 1); err != nil {
-			return nil, err
-		} else if _, err := parseToken(i, token.CloseParen, true); err != nil {
-			return nil, trace.Wrap(err, "no closing brackets (col %d)", i.MustGet().Column)
-		} else {
-			return NewApplication(expressions[0], expressions[1:]), nil
-		}
-	})
-}
-func parseAtom(i *TokenIterator) (*Atom, error) {
-	if tok, err := parseToken(i, token.Atom, true); err != nil {
-		return nil, trace.Wrap(err, "no variable (col %d)", i.Index())
-	} else {
-		return NewAtom(tok.Value), nil
-	}
-}
-func parseStatements(i *TokenIterator) ([]Statement, error) {
+func parseAbstraction(i *tokenIterator) (*Abstraction, error) {
+	if _, err := parseToken(i, TokenSlash, true); err != nil {
+		return nil, fmt.Errorf("no function slash (col %d): %w", i.MustGet().Column, err)
+	} else if parameters, err := token.ParseList(i, parseString, 0); err != nil {
+		return nil, err
+	} else if _, err = parseToken(i, TokenDot, true); err != nil {
+		return nil, fmt.Errorf("no function dot (col %d): %w", i.MustGet().Column, err)
+	} else if body, err := parseExpression(i); err != nil {
+		return nil, err
+	} else {
+		return &Abstraction{Parameters: parameters, Body: body}, nil
+	}
+}
+func parseApplication(i *tokenIterator) (*Application, error) {
+	if _, err := parseToken(i, TokenOpenParen, true); err != nil {
+		return nil, fmt.Errorf("no openning brackets (col %d): %w", i.MustGet().Column, err)
+	} else if expressions, err := token.ParseList(i, parseExpression, 1); err != nil {
+		return nil, err
+	} else if _, err := parseToken(i, TokenCloseParen, true); err != nil {
+		return nil, fmt.Errorf("no closing brackets (col %d): %w", i.MustGet().Column, err)
+	} else {
+		return &Application{Abstraction: expressions[0], Arguments: expressions[1:]}, nil
+	}
+}
+func parseAtom(i *tokenIterator) (*Atom, error) {
+	if tok, err := parseToken(i, TokenAtom, true); err != nil {
+		return nil, fmt.Errorf("no variable (col %d): %w", i.Index(), err)
+	} else {
+		return &Atom{Name: tok.Value}, nil
+	}
+}
+func parseStatements(i *tokenIterator) ([]Statement, error) {
 	statements := []Statement{}
 	//nolint:errcheck
-	parseList(i, parseBreak, 0)
+	token.ParseList(i, parseBreak, 0)
 	for {
 		if statement, err := parseStatement(i); err != nil {
 			break
-		} else if _, err := parseList(i, parseBreak, 1); err != nil && !i.Done() {
+		} else if _, err := token.ParseList(i, parseBreak, 1); err != nil && !i.Done() {
 			break
 		} else {
 			statements = append(statements, statement)
@@ -131,9 +97,9 @@ func parseStatements(i *TokenIterator) ([]Statement, error) {
 	return statements, nil
 }
-func parseClause(i *TokenIterator, braces bool) (*Clause, error) {
+func parseClause(i *tokenIterator, braces bool) (*Clause, error) {
 	if braces {
-		if _, err := parseToken(i, token.OpenBrace, true); err != nil {
+		if _, err := parseToken(i, TokenOpenBrace, true); err != nil {
 			return nil, err
 		}
 	}
@@ -152,59 +118,59 @@ func parseClause(i *TokenIterator, braces bool) (*Clause, error) {
 	}
 	if braces {
-		if _, err := parseToken(i, token.CloseBrace, true); err != nil {
+		if _, err := parseToken(i, TokenCloseBrace, true); err != nil {
 			return nil, err
 		}
 	}
-	return NewClause(stmts[:len(stmts)-1], last.Value), nil
+	return &Clause{Statements: stmts[:len(stmts)-1], Returns: last.Value}, nil
 }
-func parseExpression(i *TokenIterator) (Expression, error) {
-	return iterator.Do(i, func(i *TokenIterator) (Expression, error) {
-		passSoftBreaks(i)
-		switch peek := i.MustGet(); peek.Type {
-		case token.OpenParen:
-			return parseApplication(i)
-		case token.Slash:
-			return parseAbstraction(i)
-		case token.Atom:
-			return parseAtom(i)
-		case token.OpenBrace:
-			return parseClause(i, true)
-		default:
-			return nil, fmt.Errorf("expected expression, got '%v' (col %d)", peek.Value, peek.Column)
-		}
-	})
+func parseExpression(i *tokenIterator) (Expression, error) {
+	passSoftBreaks(i)
+	if i.Done() {
+		return nil, fmt.Errorf("unexpected end of input")
+	}
+	switch peek := i.MustGet(); peek.Type {
+	case TokenOpenParen:
+		return parseApplication(i)
+	case TokenSlash:
+		return parseAbstraction(i)
+	case TokenAtom:
+		return parseAtom(i)
+	case TokenOpenBrace:
+		return parseClause(i, true)
+	default:
+		return nil, fmt.Errorf("expected expression, got '%v' (col %d)", peek.Value, peek.Column)
+	}
 }
-func parseLet(i *TokenIterator) (*LetStatement, error) {
-	return iterator.Do(i, func(i *TokenIterator) (*LetStatement, error) {
-		if parameters, err := parseList(i, parseString, 1); err != nil {
-			return nil, err
-		} else if _, err := parseToken(i, token.Assign, true); err != nil {
-			return nil, err
-		} else if body, err := parseExpression(i); err != nil {
-			return nil, err
-		} else {
-			return NewLet(parameters[0], parameters[1:], body), nil
-		}
-	})
+func parseLet(i *tokenIterator) (*LetStatement, error) {
+	if parameters, err := token.ParseList(i, parseString, 1); err != nil {
+		return nil, err
+	} else if _, err := parseToken(i, TokenAssign, true); err != nil {
+		return nil, err
+	} else if body, err := parseExpression(i); err != nil {
+		return nil, err
+	} else {
+		return &LetStatement{Name: parameters[0], Parameters: parameters[1:], Body: body}, nil
+	}
 }
-func parseDeclare(i *TokenIterator) (*DeclareStatement, error) {
+func parseDeclare(i *tokenIterator) (*DeclareStatement, error) {
 	if value, err := parseExpression(i); err != nil {
 		return nil, err
 	} else {
-		return NewDeclare(value), nil
+		return &DeclareStatement{Value: value}, nil
 	}
 }
-func parseStatement(i *TokenIterator) (Statement, error) {
-	if let, letErr := parseLet(i); letErr == nil {
+func parseStatement(i *tokenIterator) (Statement, error) {
+	if let, letErr := iterator.Try(i, parseLet); letErr == nil {
 		return let, nil
-	} else if declare, declErr := parseDeclare(i); declErr == nil {
+	} else if declare, declErr := iterator.Try(i, parseDeclare); declErr == nil {
 		return declare, nil
 	} else {
 		return nil, errors.Join(letErr, declErr)
@@ -212,7 +178,7 @@ func parseStatement(i *TokenIterator) (Statement, error) {
 	}
 }
 // Given a list of tokens, attempt to parse it into an syntax tree.
-func parse(tokens []token.Token) (Expression, error) {
+func parse(tokens []Token) (Expression, error) {
 	i := iterator.Of(tokens)
 	exp, err := parseClause(i, false)

@@ -1,22 +1,60 @@
-// Package "saccharine" provides a simple language built on top of λ-calculus,
-// to facilitate productive coding using it.
+// Package saccharine defines the AST for the Saccharine language, a sugared
+// lambda calculus with let bindings and multi-statement clauses.
 package saccharine
-import (
-	"git.maximhutz.com/max/lambda/pkg/saccharine/token"
-)
-// Convert a piece of valid saccharine code into an expression.
-func Parse(code string) (Expression, error) {
-	tokens, err := token.Parse(code)
-	if err != nil {
-		return nil, err
-	}
-	return parse(tokens)
-}
-// Convert a parsed saccharine expression back into source code.
-func Stringify(expression Expression) string {
-	return stringifyExpression(expression)
-}
+// An Expression is a node in the Saccharine abstract syntax tree.
+// It is a sealed interface; only types in this package may implement it.
+type Expression interface {
+	expression()
+}
+// An Abstraction is a lambda expression with zero or more parameters.
+// A zero-parameter abstraction is treated as a thunk.
+type Abstraction struct {
+	Parameters []string
+	Body       Expression
+}
+// An Application applies an expression to zero or more arguments.
+type Application struct {
+	Abstraction Expression
+	Arguments   []Expression
+}
+// An Atom is a named variable.
+type Atom struct {
+	Name string
+}
+// A Clause is a sequence of statements followed by a return expression.
+type Clause struct {
+	Statements []Statement
+	Returns    Expression
+}
+func (Abstraction) expression() {}
+func (Application) expression() {}
+func (Atom) expression()        {}
+func (Clause) expression()      {}
+// A Statement is a declaration within a Clause.
+// It is a sealed interface; only types in this package may implement it.
+type Statement interface {
+	statement()
+}
+// A LetStatement binds a name (with optional parameters) to an expression.
+type LetStatement struct {
+	Name       string
+	Parameters []string
+	Body       Expression
+}
+// A DeclareStatement evaluates an expression for its side effects within a
+// clause.
+type DeclareStatement struct {
+	Value Expression
+}
+func (LetStatement) statement()     {}
+func (DeclareStatement) statement() {}

pkg/saccharine/scan.go Normal file
@@ -0,0 +1,24 @@
package saccharine
import "git.maximhutz.com/max/lambda/pkg/token"
// scanner is the declarative lexer for the Saccharine language.
var scanner = token.NewScanner(
token.On(`:=`, TokenAssign, 1),
token.On(`\(`, TokenOpenParen, 0),
token.On(`\)`, TokenCloseParen, 0),
token.On(`\{`, TokenOpenBrace, 0),
token.On(`\}`, TokenCloseBrace, 0),
token.On(`;`, TokenHardBreak, 0),
token.On(`\n`, TokenSoftBreak, 0),
token.On(`\\`, TokenSlash, 0),
token.On(`\.`, TokenDot, 0),
token.On(`[a-zA-Z0-9_]+`, TokenAtom, 0),
token.Skip[TokenType](`#[^\n]*`, 0),
token.Skip[TokenType](`[^\S\n]+`, 0),
)
// scan tokenizes a string into Saccharine tokens.
func scan(input string) ([]Token, error) {
return scanner.Scan(input)
}

@@ -1,30 +0,0 @@
package saccharine
type Statement interface {
IsStatement()
}
/** ------------------------------------------------------------------------- */
type LetStatement struct {
Name string
Parameters []string
Body Expression
}
type DeclareStatement struct {
Value Expression
}
func (LetStatement) IsStatement() {}
func (DeclareStatement) IsStatement() {}
/** ------------------------------------------------------------------------- */
func NewLet(name string, parameters []string, body Expression) *LetStatement {
return &LetStatement{Name: name, Parameters: parameters, Body: body}
}
func NewDeclare(value Expression) *DeclareStatement {
return &DeclareStatement{Value: value}
}

pkg/saccharine/token.go Normal file
@@ -0,0 +1,65 @@
package saccharine
import (
"fmt"
"git.maximhutz.com/max/lambda/pkg/token"
)
// A TokenType is an identifier for any token in the Saccharine language.
type TokenType int
// All official tokens of the Saccharine language.
const (
// TokenOpenParen denotes the '(' token.
TokenOpenParen TokenType = iota
// TokenCloseParen denotes the ')' token.
TokenCloseParen
// TokenOpenBrace denotes the '{' token.
TokenOpenBrace
// TokenCloseBrace denotes the '}' token.
TokenCloseBrace
// TokenHardBreak denotes the ';' token.
TokenHardBreak
// TokenAssign denotes the ':=' token.
TokenAssign
// TokenAtom denotes an alpha-numeric variable.
TokenAtom
// TokenSlash denotes the '\\' token.
TokenSlash
// TokenDot denotes the '.' token.
TokenDot
// TokenSoftBreak denotes a new-line.
TokenSoftBreak
)
// Name returns the type of the TokenType, as a string.
func (t TokenType) Name() string {
switch t {
case TokenOpenParen:
return "("
case TokenCloseParen:
return ")"
case TokenOpenBrace:
return "{"
case TokenCloseBrace:
return "}"
case TokenHardBreak:
return ";"
case TokenAssign:
return ":="
case TokenAtom:
return "ATOM"
case TokenSlash:
return "\\"
case TokenDot:
return "."
case TokenSoftBreak:
return "\\n"
default:
panic(fmt.Errorf("unknown token type %v", t))
}
}
// Token is the concrete token type for the Saccharine language.
type Token = token.Token[TokenType]

@@ -1,115 +0,0 @@
package token
import (
"errors"
"fmt"
"unicode"
"git.maximhutz.com/max/lambda/pkg/iterator"
"git.maximhutz.com/max/lambda/pkg/trace"
)
// isVariables determines whether a rune can be a valid variable.
func isVariable(r rune) bool {
return unicode.IsLetter(r) || unicode.IsNumber(r)
}
func parseRune(i *iterator.Iterator[rune], expected func(rune) bool) (rune, error) {
i2 := i.Copy()
if r, err := i2.Next(); err != nil {
return r, err
} else if !expected(r) {
return r, fmt.Errorf("got unexpected rune %v'", r)
} else {
i.Sync(i2)
return r, nil
}
}
func parseCharacter(i *iterator.Iterator[rune], expected rune) (rune, error) {
i2 := i.Copy()
if r, err := i2.Next(); err != nil {
return r, err
} else if r != expected {
return r, fmt.Errorf("got unexpected rune %v'", r)
} else {
i.Sync(i2)
return r, nil
}
}
// Pulls the next token from an iterator over runes. If it cannot, it will
// return nil. If an error occurs, it will return that.
func getToken(i *iterator.Iterator[rune]) (*Token, error) {
index := i.Index()
if i.Done() {
return nil, nil
}
letter, err := i.Next()
if err != nil {
return nil, trace.Wrap(err, "cannot produce next token")
}
switch {
case letter == '(':
return NewOpenParen(index), nil
case letter == ')':
return NewCloseParen(index), nil
case letter == '.':
return NewDot(index), nil
case letter == '\\':
return NewSlash(index), nil
case letter == '\n':
return NewSoftBreak(index), nil
case letter == '{':
return NewOpenBrace(index), nil
case letter == '}':
return NewCloseBrace(index), nil
case letter == ':':
if _, err := parseCharacter(i, '='); err != nil {
return nil, err
} else {
return NewAssign(index), nil
}
case letter == ';':
return NewHardBreak(index), nil
case unicode.IsSpace(letter):
return nil, nil
case isVariable(letter):
atom := []rune{letter}
for {
if r, err := parseRune(i, isVariable); err != nil {
break
} else {
atom = append(atom, r)
}
}
return NewAtom(string(atom), index), nil
}
return nil, fmt.Errorf("unknown character '%v'", string(letter))
}
// Parse a string into tokens.
func Parse(input string) ([]Token, error) {
i := iterator.Of([]rune(input))
tokens := []Token{}
errorList := []error{}
for !i.Done() {
token, err := getToken(i)
if err != nil {
errorList = append(errorList, err)
} else if token != nil {
tokens = append(tokens, *token)
}
}
return tokens, errors.Join(errorList...)
}

@@ -1,91 +0,0 @@
package token
import "fmt"
// All tokens in the pseudo-lambda language.
type Type int
const (
OpenParen Type = iota // Denotes the '(' token.
CloseParen // Denotes the ')' token.
OpenBrace // Denotes the '{' token.
CloseBrace // Denotes the '}' token.
HardBreak // Denotes the ';' token.
Assign // Denotes the ':=' token.
Atom // Denotes an alpha-numeric variable.
Slash // Denotes the '/' token.
Dot // Denotes the '.' token.
SoftBreak // Denotes a new-line.
)
// A representation of a token in source code.
type Token struct {
Column int // Where the token begins in the source text.
Type Type // What type the token is.
Value string // The value of the token.
}
func NewOpenParen(column int) *Token {
return &Token{Type: OpenParen, Column: column, Value: "("}
}
func NewCloseParen(column int) *Token {
return &Token{Type: CloseParen, Column: column, Value: ")"}
}
func NewOpenBrace(column int) *Token {
return &Token{Type: OpenBrace, Column: column, Value: "{"}
}
func NewCloseBrace(column int) *Token {
return &Token{Type: CloseBrace, Column: column, Value: "}"}
}
func NewDot(column int) *Token {
return &Token{Type: Dot, Column: column, Value: "."}
}
func NewHardBreak(column int) *Token {
return &Token{Type: HardBreak, Column: column, Value: ";"}
}
func NewAssign(column int) *Token {
return &Token{Type: Assign, Column: column, Value: ":="}
}
func NewSlash(column int) *Token {
return &Token{Type: Slash, Column: column, Value: "\\"}
}
func NewAtom(name string, column int) *Token {
return &Token{Type: Atom, Column: column, Value: name}
}
func NewSoftBreak(column int) *Token {
return &Token{Type: SoftBreak, Column: column, Value: "\\n"}
}
func Name(typ Type) string {
switch typ {
case OpenParen:
return "("
case CloseParen:
return ")"
case Slash:
return "\\"
case Dot:
return "."
case Atom:
return "ATOM"
case SoftBreak:
return "\\n"
case HardBreak:
return ";"
default:
panic(fmt.Errorf("unknown token type %v", typ))
}
}
func (t Token) Name() string {
return Name(t.Type)
}

@@ -1,29 +1,41 @@
+// Package set defines a generic, mutable unordered set data structure.
 package set
+import "iter"
+// A Set is an implementation of an mutable, unordered set. It uses a Golang map
+// as its underlying data structure.
 type Set[T comparable] map[T]bool
-func (s *Set[T]) Add(items ...T) {
+// Add appends a list of items into the set.
+func (s Set[T]) Add(items ...T) {
 	for _, item := range items {
-		(*s)[item] = true
+		s[item] = true
 	}
 }
+// Has returns true an item is present in the set.
 func (s Set[T]) Has(item T) bool {
 	return s[item]
 }
-func (s *Set[T]) Remove(items ...T) {
+// Remove deletes a list of items from the set.
+func (s Set[T]) Remove(items ...T) {
 	for _, item := range items {
-		delete(*s, item)
+		delete(s, item)
 	}
 }
-func (s *Set[T]) Merge(o *Set[T]) {
-	for item := range *o {
+// Merge adds all items in the argument into the set. The argument is not
+// mutated.
+func (s Set[T]) Merge(o Set[T]) {
+	for item := range o {
 		s.Add(item)
 	}
 }
+// ToList returns all items present in the set, as a slice. The order of the
+// items is not guaranteed.
 func (s Set[T]) ToList() []T {
 	list := []T{}
@@ -34,8 +46,21 @@ func (s Set[T]) ToList() []T {
 	return list
 }
-func New[T comparable](items ...T) *Set[T] {
-	result := &Set[T]{}
+// Items returns a sequence of all items present in the set. The order of the
+// items is not guaranteed.
+func (s Set[T]) Items() iter.Seq[T] {
+	return func(yield func(T) bool) {
+		for item := range s {
+			if !yield(item) {
+				return
+			}
+		}
+	}
+}
+// New creates a set of all items as argument.
+func New[T comparable](items ...T) Set[T] {
+	result := Set[T]{}
 	for _, item := range items {
 		result.Add(item)
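The switch from pointer to value receivers in this diff works because a Go map is a reference type: a `Set[T]` value and its copies all share the same backing store, so mutation through a value receiver is visible to the caller. A minimal self-contained sketch of that behavior (reproducing only `Add` and `Has` from the package above):

```go
package main

import (
	"fmt"
	"sort"
)

// Set mirrors pkg/set: a named map type with value receivers.
type Set[T comparable] map[T]bool

// Add mutates the shared backing map even though s is a copy of the
// caller's map header -- both point at the same storage.
func (s Set[T]) Add(items ...T) {
	for _, item := range items {
		s[item] = true
	}
}

// Has reports membership; a missing key yields the zero value false.
func (s Set[T]) Has(item T) bool { return s[item] }

func main() {
	s := Set[string]{}
	s.Add("a", "b") // value receiver, yet the caller's set is updated
	keys := []string{}
	for k := range s {
		keys = append(keys, k)
	}
	sort.Strings(keys) // map iteration order is random, so sort for display
	fmt.Println(keys, s.Has("a")) // prints "[a b] true"
}
```

The one thing value receivers cannot do is reassign the map itself (e.g. set it to nil), which is why constructors like `New` still return a fresh `Set[T]` value.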

pkg/token/parse.go Normal file
@@ -0,0 +1,42 @@
package token
import (
"fmt"
"git.maximhutz.com/max/lambda/pkg/iterator"
)
// ParseRawToken consumes the next token from the iterator if its type matches
// the expected type.
// Returns an error if the iterator is exhausted or the token type does not
// match.
func ParseRawToken[T Type](i *iterator.Iterator[Token[T]], expected T) (*Token[T], error) {
tok, err := i.Get()
if err != nil {
return nil, err
}
if tok.Type != expected {
return nil, fmt.Errorf("expected token '%v', got '%v'", expected.Name(), tok.Value)
}
i.Forward()
return &tok, nil
}
// ParseList repeatedly applies a parse function, collecting results into a
// slice.
// Stops when the parse function returns an error.
// Returns an error if fewer than minimum results are collected.
func ParseList[T Type, U any](i *iterator.Iterator[Token[T]], fn func(*iterator.Iterator[Token[T]]) (U, error), minimum int) ([]U, error) {
results := []U{}
for {
if u, err := fn(i); err != nil {
if len(results) < minimum {
return nil, fmt.Errorf("expected at least '%v' items, got only '%v': %w", minimum, len(results), err)
}
return results, nil
} else {
results = append(results, u)
}
}
}
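`ParseRawToken`'s peek-then-advance shape is what lets the leaf parsers drop `Try` entirely: the cursor only moves once a match is confirmed, so a failed match needs no backtracking. A self-contained miniature of that shape together with `ParseList`'s minimum-count loop (the `cursor` type here is a toy stand-in for `iterator.Iterator[Token[T]]`, with strings instead of tokens):

```go
package main

import (
	"errors"
	"fmt"
)

// cursor is a minimal stand-in for the token iterator.
type cursor struct {
	toks  []string
	index int
}

// get peeks at the current token without consuming it.
func (c *cursor) get() (string, error) {
	if c.index >= len(c.toks) {
		return "", errors.New("iterator exhausted")
	}
	return c.toks[c.index], nil
}

// parseRaw mirrors ParseRawToken: peek first, advance only on a match,
// so failure leaves the cursor untouched.
func parseRaw(c *cursor, expected string) (string, error) {
	tok, err := c.get()
	if err != nil {
		return "", err
	}
	if tok != expected {
		return "", fmt.Errorf("expected %q, got %q", expected, tok)
	}
	c.index++
	return tok, nil
}

// parseList mirrors ParseList: apply fn until it fails, then enforce
// the minimum count.
func parseList(c *cursor, fn func(*cursor) (string, error), minimum int) ([]string, error) {
	results := []string{}
	for {
		u, err := fn(c)
		if err != nil {
			if len(results) < minimum {
				return nil, fmt.Errorf("expected at least %d items, got %d: %w", minimum, len(results), err)
			}
			return results, nil
		}
		results = append(results, u)
	}
}

func main() {
	c := &cursor{toks: []string{"a", "a", "b"}}
	as, _ := parseList(c, func(c *cursor) (string, error) { return parseRaw(c, "a") }, 1)
	fmt.Println(len(as), c.index) // prints "2 2": only the matching prefix was consumed
}
```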

pkg/token/scanner.go Normal file
@@ -0,0 +1,129 @@
package token
import (
"errors"
"fmt"
"regexp"
"slices"
)
// A rule describes a single lexical pattern for the scanner.
type rule[T Type] struct {
pattern *regexp.Regexp
typ T
precedence int
skip bool
}
// compare orders rules by descending precedence.
func (r rule[T]) compare(other rule[T]) int {
return other.precedence - r.precedence
}
// An Option configures a Scanner during construction.
type Option[T Type] func(rules []rule[T]) []rule[T]
// On returns an option that registers a token-emitting rule.
// The token's value is the matched text.
// Higher precedence rules are tried first.
func On[T Type](pattern string, typ T, precedence int) Option[T] {
return func(rules []rule[T]) []rule[T] {
return append(rules, rule[T]{
pattern: compileAnchored(pattern),
typ: typ,
precedence: precedence,
})
}
}
// Skip returns an option that registers a non-emitting rule.
// This is used for whitespace and comments.
// Higher precedence rules are tried first.
func Skip[T Type](pattern string, precedence int) Option[T] {
return func(rules []rule[T]) []rule[T] {
return append(rules, rule[T]{
pattern: compileAnchored(pattern),
precedence: precedence,
skip: true,
})
}
}
// A Scanner is a declarative lexer built from a set of regex rules.
// Rules are sorted by precedence (highest first), with registration order as
// tiebreaker.
// At each position, the first matching rule wins.
type Scanner[T Type] struct {
rules []rule[T]
}
// NewScanner creates a Scanner by applying the given options and sorting the
// resulting rules by precedence.
func NewScanner[T Type](opts ...Option[T]) *Scanner[T] {
var rules []rule[T]
for _, opt := range opts {
rules = opt(rules)
}
slices.SortStableFunc(rules, rule[T].compare)
return &Scanner[T]{rules: rules}
}
// scanOne tries each rule at the current position and returns the first match.
// Returns the token (or nil if skipped) and the number of bytes consumed.
// Returns 0 if no rule matched.
func (s *Scanner[T]) scanOne(input string, pos int) (*Token[T], int) {
for _, r := range s.rules {
loc := r.pattern.FindStringIndex(input[pos:])
if loc == nil || loc[1] == 0 {
continue
}
if r.skip {
return nil, loc[1]
}
return &Token[T]{
Type: r.typ,
Value: input[pos : pos+loc[1]],
Column: pos,
}, loc[1]
}
return nil, 0
}
// Scan tokenizes the input string using the registered rules.
// At each position, rules are tried in precedence order and the first match
// wins.
// If no rule matches, an error is recorded and the scanner advances one byte.
func (s *Scanner[T]) Scan(input string) ([]Token[T], error) {
tokens := []Token[T]{}
errorList := []error{}
for pos := 0; pos < len(input); {
tok, n := s.scanOne(input, pos)
if n == 0 {
errorList = append(errorList, fmt.Errorf("unknown character '%v'", string(input[pos])))
pos++
continue
}
if tok != nil {
tokens = append(tokens, *tok)
}
pos += n
}
return tokens, errors.Join(errorList...)
}
// compileAnchored compiles a regex pattern, prepending \A so it only matches
// at the current scan position.
// Patterns must not be pre-anchored.
func compileAnchored(pattern string) *regexp.Regexp {
return regexp.MustCompile(`\A(?:` + pattern + `)`)
}
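The scanner's loop can be exercised in miniature with only the standard library: anchor each pattern with `\A`, sort rules by descending precedence (stable, so registration order breaks ties), and at each position take the first rule that matches. This sketch reimplements that idea under those assumptions; the `rule` fields and labels are illustrative, not the package's actual API:

```go
package main

import (
	"fmt"
	"regexp"
	"sort"
)

// rule mirrors the scanner's shape: anchored pattern, label,
// precedence, and a skip flag for non-emitting rules.
type rule struct {
	pattern    *regexp.Regexp
	label      string
	precedence int
	skip       bool
}

// anchored prepends \A so a pattern only matches at the scan position,
// like compileAnchored above.
func anchored(p string) *regexp.Regexp {
	return regexp.MustCompile(`\A(?:` + p + `)`)
}

func scan(input string, rules []rule) []string {
	// Highest precedence first; stable sort keeps registration order
	// as the tiebreaker.
	sort.SliceStable(rules, func(a, b int) bool {
		return rules[a].precedence > rules[b].precedence
	})
	var out []string
	for pos := 0; pos < len(input); {
		matched := false
		for _, r := range rules {
			loc := r.pattern.FindStringIndex(input[pos:])
			if loc == nil || loc[1] == 0 {
				continue // no match, or an empty match; try the next rule
			}
			if !r.skip {
				out = append(out, r.label+":"+input[pos:pos+loc[1]])
			}
			pos += loc[1]
			matched = true
			break
		}
		if !matched {
			pos++ // unknown byte: Scan records an error and advances one byte
		}
	}
	return out
}

func main() {
	rules := []rule{
		{anchored(`:=`), "ASSIGN", 1, false},
		{anchored(`[a-z]+`), "ATOM", 0, false},
		{anchored(`\s+`), "", 0, true},
	}
	fmt.Println(scan("x := y", rules)) // prints "[ATOM:x ASSIGN::= ATOM:y]"
}
```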

pkg/token/token.go Normal file
@@ -0,0 +1,24 @@
// Package token provides generic token types and scanning/parsing primitives
// for building language-specific lexers and parsers.
package token
// A Type is a constraint for language-specific token type enums.
// It must be comparable (for equality checks) and must have a Name method
// that returns a human-readable string for error messages.
type Type interface {
comparable
// Name returns a human-readable name for this token type.
Name() string
}
// A Token is a lexical unit in a source language.
type Token[T Type] struct {
Column int // Where the token begins in the source text.
Type T // What type the token is.
Value string // The value of the token.
}
// Name returns the type of the Token, as a string.
func (t Token[T]) Name() string {
return t.Type.Name()
}
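The `Type` constraint combines `comparable` (so `tok.Type != expected` works) with a required `Name` method (so error messages stay readable). A self-contained sketch of how a language-specific enum plugs into that constraint; `demoType` is a throwaway example, not a type from this repository:

```go
package main

import "fmt"

// Type mirrors pkg/token's constraint: comparable plus a Name method.
type Type interface {
	comparable
	Name() string
}

// Token mirrors the generic token struct.
type Token[T Type] struct {
	Column int
	Type   T
	Value  string
}

// demoType is a toy enum satisfying the constraint.
type demoType int

// Name gives the human-readable label used in error messages.
func (t demoType) Name() string {
	if t == 0 {
		return "WORD"
	}
	return "?"
}

func main() {
	tok := Token[demoType]{Column: 0, Type: demoType(0), Value: "hello"}
	fmt.Println(tok.Type.Name(), tok.Value) // prints "WORD hello"
}
```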

@@ -1,25 +0,0 @@
package trace
import (
"errors"
"fmt"
"strings"
)
func Indent(s string, size int) string {
lines := strings.Lines(s)
indent := strings.Repeat(" ", size)
indented := ""
for line := range lines {
indented += indent + line
}
return indented
}
func Wrap(child error, format string, a ...any) error {
parent := fmt.Errorf(format, a...)
childErrString := Indent(child.Error(), 4)
return errors.New(parent.Error() + "\n" + childErrString)
}

@@ -1,16 +0,0 @@
(\0.
(\inc.
(\add.
(\mult.
(\exp.
(exp (inc (inc (inc (inc 0)))) (inc (inc (inc (inc (inc 0))))))
\n m.(m n)
)
\m n f.(m (n f))
)
\n m.(m inc n)
)
\n f x.(f (n f x))
)
\f x.x
)

File diff suppressed because one or more lines are too long

@@ -1,7 +1,8 @@
 0 := \f.\x.x
 inc n := \f x.(f (n f x))
 exp n m := (m n)
+print n := (n F X)
 N := (inc (inc (inc (inc (inc 0)))))
-(exp N N)
+(print (exp N N))

tests/comments.expected Normal file
@@ -0,0 +1 @@
VALUE

tests/comments.test Normal file
@@ -0,0 +1,17 @@
# This is a full-line comment at the start
# The following defines the identity function
identity := \x.x # This is an end-of-line comment
# Define a simple function that applies a function twice
twice := \f.\x.(f
# Comments can be anywhere!
(f x))
# Test that comments don't interfere with expressions
result := (twice identity VALUE) # Should just return VALUE
# Multiple comments in a row
# can appear anywhere
# without breaking the code
result # Final comment at the end

@@ -0,0 +1 @@
(0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (1 END)))))))))))))))))))))))))))))))

tests/list_2^30.expected Normal file
@@ -0,0 +1 @@
(0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (0 (1 END)))))))))))))))))))))))))))))))

tests/thunk.expected Normal file
@@ -0,0 +1 @@
VALUE