15 Commits

Author SHA1 Message Date
76ea6ea2cb feat: functional options pattern 2026-02-11 21:08:57 -05:00
3b7cf21eb7 feat: undo 2026-02-11 20:54:05 -05:00
b3f9f08c62 feat: scanner added 2026-02-11 20:28:29 -05:00
aca197ef51 refactor: simplify iterator.Try and remove unnecessary backtracking (#47)
## Description

`iterator.Try` previously copied the entire iterator and synced it back on success, causing an unnecessary heap allocation on every call.
This PR simplifies `Try` to save and restore the index directly, and removes the now-unused `Copy` and `Sync` methods.

- Rewrite `ScanRune` and `ParseRawToken` as peek-then-advance, eliminating the need for `Try` at the leaf level.
- Remove redundant `Try` wrappers from `parseExpression`, `parseAbstraction`, `parseApplication`, `parseLet`, and `parseToken`, which are already disambiguated by their callers.
- Keep `Try` only where true backtracking is needed: `parseStatement`, which must choose between `parseLet` and `parseDeclare`.
- Fix a pre-existing panic in saccharine's `parseExpression` when the iterator is exhausted (added a `Done()` guard).

### Decisions

- `Try` now operates on the original iterator instead of a copy, removing the confusing pattern where the callback's `i` was a different object than the caller's `i`.
- Removed `parseSoftBreak` and `parseHardBreak` helper functions since `ParseRawToken` no longer needs `Try` wrapping.

## Benefits

- Eliminates a heap allocation per `Try` call.
- Reduces nesting and indirection in all parse functions.
- Makes the code easier to follow by removing the shadow-`i` pattern.
- `Try` is now only used at genuine choice points in the grammar.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #47
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-02-12 01:04:26 +00:00
da3da70855 refactor: extract shared token package (#46)
## Description

Both the `saccharine` and `lambda` packages need tokenizing and parsing primitives.
This PR extracts shared token infrastructure into a new `pkg/token` package, then wires both languages up to use it.

- Add `pkg/token` with a generic `Token[T]` type, `Scan`, `ScanAtom`, `ScanRune`, `ScanCharacter`, `IsVariable`, `ParseRawToken`, and `ParseList`.
- Refactor `pkg/saccharine` to delegate to `pkg/token`, removing duplicated scanning and parsing helpers.
- Implement `Codec.Decode` for `pkg/lambda` (scanner + parser) using the shared token package.
- Add `iterator.While` for predicate-driven iteration.
- Rename `iterator.Do` to `iterator.Try` to better describe its rollback semantics.

### Decisions

- The `Type` constraint (`comparable` + `Name() string`) keeps the generic token flexible while ensuring every token type can produce readable error messages.
- `iterator.Do` was renamed to `iterator.Try` since it describes a try/rollback operation, not a side-effecting "do".

## Benefits

- Eliminates duplicated token, scanning, and parsing code between languages.
- Enables the `lambda` package to decode (parse) lambda calculus strings, which was previously unimplemented.
- Makes it straightforward to add new languages by reusing `pkg/token` primitives.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Reviewed-on: #46
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-02-12 00:25:18 +00:00
361f529bdc docs: document remaining packages and simplify AST types (#45)
## Summary

- Added doc comments across the codebase: `pkg/lambda`, `pkg/saccharine`, `pkg/codec`, `pkg/engine`, `pkg/iterator`, `pkg/set`, `pkg/convert`, `internal/registry`, and `cmd/lambda`.
- Made lambda and saccharine expression structs use public fields instead of getters, matching `go/ast` conventions.
- Removed superfluous constructors for saccharine and lambda expression/statement types in favor of struct literals.
- Consolidated saccharine token constructors into a single `NewToken` function.
- Removed the unused `trace` package.

## Test plan

- [x] `go build ./...` passes.
- [x] `go test ./...` passes.
- [ ] Verify `go doc` output renders correctly for documented packages.

Reviewed-on: #45
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-02-10 01:15:41 +00:00
1f486875fd style: rename repr to expr (#44)
## Description

The `Repr` type name was unclear: it was intended to represent a lambda calculus expression, not a "representation."
This PR renames `Repr` to `Expr` throughout the registry package for clarity.

- Rename `Repr` interface to `Expr` and `baseRepr` struct to `baseExpr`.
- Rename `repr.go` to `expr.go`.
- Rename `ID()` method to `Repr()` to indicate the representation type.
- Rename `NewRepr` constructor to `NewExpr`.
- Update all usages in codec, conversion, engine, process, and registry files.
- Add command aliases `conv` and `eng` for `convert` and `engine` subcommands.

## Benefits

- The naming better reflects the domain: an `Expr` is an expression, and `Repr()` returns its representation kind.
- Command aliases reduce typing for common subcommands.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #44
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-02-07 15:26:50 +00:00
bbe027e9f4 style: restructure cli and registry packages (#43)
## Description

The `internal/cli` package had grown to contain both CLI utilities (source/destination I/O) and registry-level abstractions (repr, conversion, engine, marshaler).
This PR separates concerns by moving registry types into `internal/registry` and keeping only CLI I/O types in `internal/cli`.
It also simplifies several core abstractions and aligns naming conventions.

- Move `Source`, `Destination` from `internal/config` to `internal/cli`.
- Move `Repr`, `Conversion`, `Engine`, `Process`, `Codec` from `internal/cli` to `internal/registry`.
- Rename "marshalers" to "codecs" throughout the codebase.
- Simplify `codec.Codec[T, U]` to `codec.Codec[T]` (string-based marshaling only).
- Add `codec.Conversion[T, U]` as a function type alias.
- Change `engine.Engine[T]` from an interface to a function type.
- Merge `Engine.Load()` + `Process.Set()` into a single `Engine.Load(Repr)` call.
- Convert `Saccharine2Lambda` from a struct to standalone conversion functions.
- Replace registry methods (`MustAddMarshaler`, `MustAddEngine`, `MustAddConversions`) with generic free functions (`RegisterCodec`, `RegisterEngine`, `RegisterConversion`).
- Remove unused `internal/config` package (`Config`, `GetLogger`, `ParseFromArgs`).
- Remove unused `pkg/emitter` package.
- Rename `Id()` to `ID()` per Go conventions.
- Add documentation comments and enable `checkPublicInterface` lint rule.
- Rename `reduce_one.go` to `reduce_once.go`.

### Decisions

- `Engine[T]` is now a function type (`func(T) (Process[T], error)`) rather than an interface, since the only method was `Load`.
- `Codec[T, U]` was split into `Codec[T]` (string marshaling) and `Conversion[T, U]` (type-to-type conversion function), which better reflects how they are actually used.
- Registration uses free generic functions (`RegisterCodec`, `RegisterEngine`, `RegisterConversion`) instead of methods on `Registry`, enabling type inference at the call site.

## Benefits

- Clearer separation of concerns between CLI I/O and the registry's internal type system.
- Simpler abstractions: fewer interfaces, fewer wrapper types, fewer indirections.
- Removing unused packages (`config`, `emitter`) reduces maintenance burden.
- Naming conventions (`ID`, codecs, `reduce_once`) are more idiomatic.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #43
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-02-07 05:39:32 +00:00
58d0823069 feat: rename --from/--to flags to --input/--output (#42)
## Description

The `convert` and `reduce` commands used `--from` and `--to` flags to specify input/output representations.
Those names are ambiguous: they read like file paths rather than representation names.
This PR renames them to `--input`/`--output` and adds `-i`/`-o` short aliases for convenience.

- Rename `--from` to `--input` (`-i`) in `convert` and `reduce` commands.
- Rename `--to` to `--output` (`-o`) in the `convert` command.
- Switch from `StringVar` to `StringVarP` to support the new short flags.

## Benefits

- Flag names now clearly indicate they refer to representations, not file paths.
- Short aliases `-i` and `-o` make CLI usage more concise.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Reviewed-on: #42
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-02-07 04:31:57 +00:00
a3ee34732e refactor: rewrite CLI and internal architecture (#41)
## Description

The old architecture used a monolithic `main()` with a custom arg parser, an event-emitter-based runtime, and a plugin system for optional features.
This PR rewrites the CLI and internal architecture to be modular, extensible, and built around a registry of interchangeable components.

- Replace custom CLI arg parsing with Cobra subcommands (`convert`, `reduce`, `engine list`).
- Introduce a registry system (`internal/registry`) for marshalers, codecs, and engines, with BFS-based conversion path resolution.
- Add type-erased adapter layer (`internal/cli`) with `Repr`, `Engine`, `Process`, `Marshaler`, and `Conversion` interfaces wrapping generic `pkg/` types.
- Replace the event-emitter-based `Runtime` with a simpler `Engine`/`Process` model (`pkg/engine`).
- Add generic `Codec[T, U]` and `Marshaler[T]` interfaces (`pkg/codec`).
- Merge `saccharine/token` sub-package into `saccharine` and rename scanner functions from `parse*` to `scan*`.
- Make saccharine-to-lambda conversion bidirectional (encode and decode).
- Add `lambda.Marshaler` and `saccharine.Marshaler` implementing `codec.Marshaler`.
- Remove old infrastructure: `pkg/runtime`, `pkg/expr`, `internal/plugins`, `internal/statistics`.
- Add `make lint` target and update golangci-lint config.

### Decisions

- Cobra was chosen for the CLI framework to support nested subcommands and standard flag handling.
- The registry uses BFS to find conversion paths between representations, allowing multi-hop conversions without hardcoding routes.
- Type erasure via `cli.Repr` (wrapping `any`) enables the registry to work with heterogeneous types while keeping `pkg/` generics type-safe.
- The old plugin/event system was removed entirely rather than adapted, since the new `Process` model can support hooks differently in the future.

## Benefits

- Subcommands make the CLI self-documenting and easier to extend with new functionality.
- The registry pattern decouples representations, conversions, and engines, making it trivial to add new ones.
- BFS conversion routing means adding a single codec automatically enables transitive conversions.
- Simpler `Engine`/`Process` model reduces complexity compared to the event-emitter runtime.
- Consolidating the `token` sub-package reduces import depth and package sprawl.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [ ] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Reviewed-on: #41
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-02-07 03:25:32 +00:00
f2c8d9f7d2 fix: use loop variable instead of global ticker in GenerateFreshName (#40)
## Description

`GenerateFreshName` used a global `ticker` variable but never incremented it inside the loop.
This caused an infinite loop if the first generated name (`_0`) was already in the used set.

- Remove global `ticker` variable.
- Use loop variable `i` directly to generate candidate names.

## Benefits

- Fixes infinite loop bug when generated name collides with used set.
- Removes unnecessary global state.
- Simpler and more predictable behavior.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #40
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-18 20:58:23 +00:00
9c7fb8ceba refactor: rename interpreter to runtime and use receiver methods (#39)
## Description

The codebase previously used "interpreter" terminology and standalone functions for expression operations.
This PR modernizes the architecture by renaming to "runtime" and converting operations to receiver methods.

- Rename `pkg/interpreter` to `pkg/runtime`.
- Move `ReduceOnce` to new `pkg/normalorder` package for reduction strategy isolation.
- Convert standalone functions (`Substitute`, `Rename`, `GetFree`, `IsFree`) to receiver methods on concrete expression types.
- Change `Set` from pointer receivers to value receivers for simpler usage.
- Update all references from "interpreter" to "runtime" terminology throughout the codebase.

### Decisions

- Operations like `Substitute`, `Rename`, `GetFree`, and `IsFree` are now methods on the `Expression` interface, implemented by each concrete type (`Variable`, `Abstraction`, `Application`).
- The `normalorder` package isolates the normal-order reduction strategy, allowing future reduction strategies to be added in separate packages.
- `Set` uses value receivers since Go maps are reference types and don't require pointer semantics.

## Benefits

- Cleaner API: `expr.Substitute(target, replacement)` instead of `Substitute(expr, target, replacement)`.
- Better separation of concerns: reduction strategies are isolated from expression types.
- Consistent terminology: "runtime" better reflects the execution model.
- Simpler `Set` usage without needing to manage pointers.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #39
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-18 20:52:34 +00:00
e85cf7ceff refactor: make lambda expression types immutable (#38)
## Summary

- Change Abstraction, Application, and Variable to use private fields with getter methods.
- Return value types instead of pointers from constructors.
- Update all type switches to match value types instead of pointer types.

## Test plan

- [x] All existing tests pass (`make test`).

Reviewed-on: #38
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-17 22:00:54 +00:00
c2aa77cb92 refactor: remove visitor pattern (#37)
## Description

The codebase previously used the visitor pattern for traversing lambda calculus expressions.
This was a holdover from avoiding Go's idiomatic type switches and methods.
This PR removes the visitor pattern in favor of direct method implementations.

- Remove `Visitor` interface from `expression.go`.
- Remove `Accept` methods from `Abstraction`, `Application`, and `Variable`.
- Remove `Accept` from `Expression` interface.
- Delete `stringify.go` and move `String()` logic directly into each type.
- Add compile-time interface checks (`var _ Expression = (*Type)(nil)`).
- Update `expr.Expression` to embed `fmt.Stringer` instead of declaring `String() string`.

### Decisions

- Moved `String()` implementations directly into each expression type rather than using a separate recursive function, as each type's string representation is simple enough to be self-contained.

## Benefits

- Simpler, more idiomatic Go code using type methods instead of visitor pattern.
- Reduced indirection and fewer files to maintain.
- Compile-time interface satisfaction checks catch implementation errors early.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Closes #36

Reviewed-on: #37
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-17 20:46:07 +00:00
52d40adcc6 chore: remove unused deltanet package (#35)
## Description

The `deltanet` package was an unused stub in the codebase.
This PR removes it to reduce clutter.

- Removed `pkg/deltanet/deltanet.go`.
- Removed `pkg/deltanet/node.go`.

Closes #34

## Benefits

- Reduces codebase complexity by removing unused code.
- Eliminates potential confusion from an incomplete stub package.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`).
- [x] Tests pass (if applicable).
- [x] Documentation updated (if applicable).

Reviewed-on: #35
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
2026-01-17 19:56:58 +00:00
85 changed files with 1739 additions and 2148 deletions

@@ -1,6 +1,6 @@
---
name: "Bug Report"
about: "Report a bug or unexpected behavior in the lambda interpreter."
about: "Report a bug or unexpected behavior in the lambda runtime."
title: "fix: "
ref: "main"
assignees: []

@@ -1,6 +1,6 @@
---
name: "Feature Request"
about: "Suggest a new feature or enhancement for the lambda interpreter."
about: "Suggest a new feature or enhancement for the lambda runtime."
title: "feat: "
ref: "main"
assignees: []

@@ -48,7 +48,7 @@ linters:
# More information: https://golangci-lint.run/usage/false-positives/#comments
#
# Please uncomment the following line if your code is not using the godoc format
- comments
# - comments
# Common false positives
# feel free to remove this if you don't have any false positives
@@ -126,6 +126,9 @@ linters:
# Blank import should be only in a main or test package, or have a comment justifying it.
- name: blank-imports
# Packages should have comments of the form "Package x ...".
- name: package-comments
# context.Context() should be the first parameter of a function when provided as argument.
- name: context-as-argument
arguments:
@@ -157,6 +160,8 @@ linters:
arguments:
# make error messages clearer
- "sayRepetitiveInsteadOfStutters"
# require comments on public interface methods
- "checkPublicInterface"
# incrementing an integer variable by 1 is recommended to be done using the `++` operator
- name: increment-decrement

@@ -48,7 +48,7 @@ The "source code" for a work means the preferred form of the work for making mod
A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.
The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code runtime used to run it.
The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those
subprograms and other parts of the work.

@@ -1,20 +1,21 @@
BINARY_NAME=lambda
TEST=simple
.PHONY: help build run profile explain graph docs test bench clean
.PHONY: help build run profile explain graph docs test bench lint clean
.DEFAULT_GOAL := help
.SILENT:
help:
echo "Available targets:"
echo " build - Build the lambda executable"
echo " run - Build and run the lambda interpreter (use TEST=<name> to specify sample)"
echo " run - Build and run the lambda runtime (use TEST=<name> to specify sample)"
echo " profile - Build and run with CPU profiling enabled"
echo " explain - Build and run with explanation mode and profiling"
echo " graph - Generate and open CPU profile visualization"
echo " docs - Start local godoc server on port 6060"
echo " test - Run tests for all samples"
echo " bench - Run benchmarks for all samples"
echo " lint - Run golangci-lint on all packages"
echo " clean - Remove all build artifacts"
build:
@@ -45,6 +46,9 @@ test:
bench:
go test -bench=. -benchtime=10x -cpu=4 ./cmd/lambda
lint:
go run github.com/golangci/golangci-lint/v2/cmd/golangci-lint@latest run ./...
clean:
rm -f ${BINARY_NAME}
rm -f program.out

@@ -1,6 +1,6 @@
# lambda
Making a lambda calculus interpreter in Go.
Making a lambda calculus runtime in Go.
## Things to talk about

@@ -1,85 +1,32 @@
// Package main defines the 'lambda' command-line interface (CLI).
package main
import (
"os"
"git.maximhutz.com/max/lambda/internal/cli"
"git.maximhutz.com/max/lambda/internal/config"
"git.maximhutz.com/max/lambda/internal/plugins"
"git.maximhutz.com/max/lambda/pkg/convert"
"git.maximhutz.com/max/lambda/pkg/debruijn"
"git.maximhutz.com/max/lambda/pkg/lambda"
"git.maximhutz.com/max/lambda/pkg/reducer"
"git.maximhutz.com/max/lambda/pkg/saccharine"
"github.com/spf13/cobra"
)
func main() {
// Parse CLI arguments.
options, err := config.FromArgs()
cli.HandleError(err)
logger := options.GetLogger()
logger.Info("using program arguments", "args", os.Args)
logger.Info("parsed CLI options", "options", options)
// Get input.
input, err := options.Source.Extract()
cli.HandleError(err)
// Parse code into syntax tree.
ast, err := saccharine.Parse(input)
cli.HandleError(err)
logger.Info("parsed syntax tree", "tree", ast)
// Compile expression to lambda calculus.
compiled := convert.SaccharineToLambda(ast)
logger.Info("compiled λ expression", "tree", compiled.String())
// Create reducer based on the selected interpreter.
var red reducer.Reducer
switch options.Interpreter {
case config.DeBruijnInterpreter:
dbExpr := convert.LambdaToDeBruijn(compiled)
logger.Info("converted to De Bruijn", "tree", dbExpr.String())
red = debruijn.NewNormalOrderReducer(&dbExpr)
default:
red = lambda.NewNormalOrderReducer(&compiled)
func Lambda() *cobra.Command {
cmd := &cobra.Command{
Use: "lambda",
Short: "Lambda calculus interpreter",
Long: "A lambda calculus interpreter supporting multiple representations.",
RunE: func(cmd *cobra.Command, _ []string) error {
return cmd.Help()
},
}
// If the user selected to track CPU performance, attach a profiler.
if options.Profile != "" {
plugins.NewPerformance(options.Profile, red)
}
cmd.AddCommand(LambdaConvert())
cmd.AddCommand(LambdaEngine())
cmd.AddCommand(LambdaReduce())
// If the user selected to produce a step-by-step explanation, attach an
// observer.
if options.Explanation {
plugins.NewExplanation(red)
}
// If the user opted to track statistics, attach a tracker.
if options.Statistics {
plugins.NewStatistics(red)
}
// If the user selected for verbose debug logs, attach a reduction tracker.
if options.Verbose {
plugins.NewLogs(logger, red)
}
// Run reduction.
red.Reduce()
// Return the final reduced result.
// For De Bruijn, convert back to lambda for consistent output.
var result string
if options.Interpreter == config.DeBruijnInterpreter {
dbExpr := red.Expression().(debruijn.Expression)
lambdaExpr := convert.DeBruijnToLambda(dbExpr)
result = lambdaExpr.String()
} else {
result = red.Expression().String()
}
err = options.Destination.Write(result)
cli.HandleError(err)
return cmd
}
func main() {
lambda := Lambda()
if err := lambda.Execute(); err != nil {
os.Exit(1)
}
}

@@ -0,0 +1,95 @@
package main
import (
"fmt"
"os"
"path/filepath"
"strings"
"github.com/spf13/cobra"
)
// inferReprFromPath returns the repr type based on file extension.
func inferReprFromPath(path string) (string, error) {
switch ext := strings.ToLower(filepath.Ext(path)); ext {
case ".lambda", ".lam", ".lc":
return "lambda", nil
case ".saccharine", ".sch":
return "saccharine", nil
default:
return "", fmt.Errorf("unknown file extension '%s'", ext)
}
}
func LambdaConvert() *cobra.Command {
var inputReprFlag, outputReprFlag string
cmd := &cobra.Command{
Use: "convert <input-file> <output-file>",
Aliases: []string{"conv"},
Short: "Convert between lambda calculus representations",
SilenceUsage: true,
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) != 2 {
return cmd.Help()
}
var err error
inputPath, outputPath := args[0], args[1]
// Use flag if provided, otherwise infer from extension.
inputRepr := inputReprFlag
if inputRepr == "" {
if inputRepr, err = inferReprFromPath(inputPath); err != nil {
return fmt.Errorf("input file: %w", err)
}
}
outputRepr := outputReprFlag
if outputRepr == "" {
if outputRepr, err = inferReprFromPath(outputPath); err != nil {
return fmt.Errorf("output file: %w", err)
}
}
// Read input file.
input, err := os.ReadFile(inputPath)
if err != nil {
return fmt.Errorf("reading input file: %w", err)
}
r := GetRegistry()
// Parse input into syntax tree.
repr, err := r.Unmarshal(string(input), inputRepr)
if err != nil {
return fmt.Errorf("parsing input: %w", err)
}
// Convert to output repr if different.
result, err := r.ConvertTo(repr, outputRepr)
if err != nil {
return fmt.Errorf("converting %s to %s: %w", inputRepr, outputRepr, err)
}
// Marshal output.
output, err := r.Marshal(result)
if err != nil {
return fmt.Errorf("marshaling output: %w", err)
}
// Write output file.
err = os.WriteFile(outputPath, []byte(output), 0644)
if err != nil {
return fmt.Errorf("writing output file: %w", err)
}
return nil
},
}
cmd.Flags().StringVarP(&inputReprFlag, "input", "i", "", "Input representation (inferred from extension if unset)")
cmd.Flags().StringVarP(&outputReprFlag, "output", "o", "", "Output representation (inferred from extension if unset)")
return cmd
}

@@ -0,0 +1,20 @@
package main
import (
"github.com/spf13/cobra"
)
func LambdaEngine() *cobra.Command {
cmd := &cobra.Command{
Use: "engine",
Aliases: []string{"eng"},
Short: "Information about available engines",
RunE: func(cmd *cobra.Command, _ []string) error {
return cmd.Help()
},
}
cmd.AddCommand(LambdaEngineList())
return cmd
}

@@ -0,0 +1,26 @@
package main
import (
"fmt"
"github.com/spf13/cobra"
)
func LambdaEngineList() *cobra.Command {
cmd := &cobra.Command{
Use: "list",
Aliases: []string{"ls"},
Short: "List available engines",
RunE: func(*cobra.Command, []string) error {
r := GetRegistry()
for engine := range r.ListEngines() {
fmt.Println(engine.Name())
}
return nil
},
}
return cmd
}

cmd/lambda/lambda_reduce.go Normal file

@@ -0,0 +1,108 @@
package main
import (
"fmt"
"github.com/spf13/cobra"
"git.maximhutz.com/max/lambda/internal/cli"
"git.maximhutz.com/max/lambda/internal/registry"
)
func LambdaReduce() *cobra.Command {
var inputReprFlag string
var engineFlag string
cmd := &cobra.Command{
Use: "reduce <input-file>",
Short: "Reduce a lambda calculus expression",
SilenceUsage: true,
Aliases: []string{"run"},
RunE: func(cmd *cobra.Command, args []string) error {
var err error
if len(args) != 1 {
return cmd.Help()
}
inputPath := args[0]
// Get input source.
var source cli.Source
if inputPath == "-" {
source = cli.StdinSource{}
} else {
source = cli.FileSource{Path: inputPath}
}
destination := cli.StdoutDestination{}
r := GetRegistry()
// Get input.
input, err := source.Extract()
if err != nil {
return err
}
// Use flag if provided, otherwise infer from extension.
inputRepr := inputReprFlag
if inputRepr == "" {
if inputRepr, err = inferReprFromPath(inputPath); err != nil {
return fmt.Errorf("input file: %w", err)
}
}
// Find engine.
var engine registry.Engine
if engineFlag == "" {
if engine, err = r.GetDefaultEngine(inputRepr); err != nil {
return err
}
} else {
if engine, err = r.GetEngine(engineFlag); err != nil {
return err
}
}
// Parse code into syntax tree.
repr, err := r.Unmarshal(input, inputRepr)
if err != nil {
return err
}
// Compile expression to lambda calculus.
compiled, err := r.ConvertTo(repr, "lambda")
if err != nil {
return err
}
// Create process.
process, err := engine.Load(compiled)
if err != nil {
return err
}
// Run reduction.
for process.Step(1) {
}
// Return the final reduced result.
result, err := process.Get()
if err != nil {
return err
}
output, err := r.Marshal(result)
if err != nil {
return err
}
return destination.Write(output)
},
}
cmd.Flags().StringVarP(&inputReprFlag, "input", "i", "", "Input representation (inferred from extension if unset)")
cmd.Flags().StringVarP(&engineFlag, "engine", "e", "", "Reduction engine (inferred from '--input' if unset)")
return cmd
}

@@ -1,163 +0,0 @@
package main
import (
"os"
"path/filepath"
"strings"
"testing"
"git.maximhutz.com/max/lambda/pkg/convert"
"git.maximhutz.com/max/lambda/pkg/debruijn"
"git.maximhutz.com/max/lambda/pkg/lambda"
"git.maximhutz.com/max/lambda/pkg/saccharine"
"github.com/stretchr/testify/assert"
)
// Helper function to run a single sample through the lambda interpreter.
func runSample(samplePath string) (string, error) {
// Read the sample file.
input, err := os.ReadFile(samplePath)
if err != nil {
return "", err
}
// Parse code into syntax tree.
ast, err := saccharine.Parse(string(input))
if err != nil {
return "", err
}
// Compile expression to lambda calculus.
compiled := convert.SaccharineToLambda(ast)
// Create and run the reducer.
reducer := lambda.NewNormalOrderReducer(&compiled)
reducer.Reduce()
return reducer.Expression().String() + "\n", nil
}
// Helper function to run a single sample through the De Bruijn interpreter.
func runSampleDeBruijn(samplePath string) (string, error) {
// Read the sample file.
input, err := os.ReadFile(samplePath)
if err != nil {
return "", err
}
// Parse code into syntax tree.
ast, err := saccharine.Parse(string(input))
if err != nil {
return "", err
}
// Compile expression to lambda calculus.
compiled := convert.SaccharineToLambda(ast)
// Convert to De Bruijn and run reducer.
dbExpr := convert.LambdaToDeBruijn(compiled)
reducer := debruijn.NewNormalOrderReducer(&dbExpr)
reducer.Reduce()
// Convert back to lambda for output.
result := reducer.Expression().(debruijn.Expression)
lambdaResult := convert.DeBruijnToLambda(result)
return lambdaResult.String() + "\n", nil
}
// Test that all samples produce expected output with lambda interpreter.
func TestSamplesValidity(t *testing.T) {
// Discover all .test files in the tests directory.
testFiles, err := filepath.Glob("../../tests/*.test")
assert.NoError(t, err, "Failed to read tests directory.")
assert.NotEmpty(t, testFiles, "No '*.test' files found in directory.")
for _, testPath := range testFiles {
// Build expected file path.
expectedPath := strings.TrimSuffix(testPath, filepath.Ext(testPath)) + ".expected"
name := strings.TrimSuffix(filepath.Base(testPath), filepath.Ext(testPath))
t.Run(name, func(t *testing.T) {
// Run the sample and capture output.
actual, err := runSample(testPath)
assert.NoError(t, err, "Failed to run sample.")
// Read expected output.
expectedBytes, err := os.ReadFile(expectedPath)
assert.NoError(t, err, "Failed to read expected output.")
expected := string(expectedBytes)
// Compare outputs.
assert.Equal(t, expected, actual, "Output does not match expected.")
})
}
}
// Test that all samples produce expected output with De Bruijn interpreter.
func TestSamplesValidityDeBruijn(t *testing.T) {
// Discover all .test files in the tests directory.
testFiles, err := filepath.Glob("../../tests/*.test")
assert.NoError(t, err, "Failed to read tests directory.")
assert.NotEmpty(t, testFiles, "No '*.test' files found in directory.")
for _, testPath := range testFiles {
// Build expected file path.
expectedPath := strings.TrimSuffix(testPath, filepath.Ext(testPath)) + ".expected"
name := strings.TrimSuffix(filepath.Base(testPath), filepath.Ext(testPath))
t.Run(name, func(t *testing.T) {
// Run the sample and capture output.
actual, err := runSampleDeBruijn(testPath)
assert.NoError(t, err, "Failed to run sample.")
// Read expected output.
expectedBytes, err := os.ReadFile(expectedPath)
assert.NoError(t, err, "Failed to read expected output.")
expected := string(expectedBytes)
// Compare outputs.
assert.Equal(t, expected, actual, "Output does not match expected.")
})
}
}
// Benchmark all samples using sub-benchmarks.
func BenchmarkSamples(b *testing.B) {
// Discover all .test files in the tests directory.
testFiles, err := filepath.Glob("../../tests/*.test")
assert.NoError(b, err, "Failed to read tests directory.")
assert.NotEmpty(b, testFiles, "No '*.test' files found in directory.")
for _, path := range testFiles {
name := strings.TrimSuffix(filepath.Base(path), filepath.Ext(path))
b.Run(name, func(b *testing.B) {
for b.Loop() {
_, err := runSample(path)
assert.NoError(b, err, "Failed to run sample.")
}
})
}
}
// Benchmark all samples using De Bruijn interpreter.
func BenchmarkSamplesDeBruijn(b *testing.B) {
// Discover all .test files in the tests directory.
testFiles, err := filepath.Glob("../../tests/*.test")
assert.NoError(b, err, "Failed to read tests directory.")
assert.NotEmpty(b, testFiles, "No '*.test' files found in directory.")
for _, path := range testFiles {
name := strings.TrimSuffix(filepath.Base(path), filepath.Ext(path))
b.Run(name, func(b *testing.B) {
for b.Loop() {
_, err := runSampleDeBruijn(path)
assert.NoError(b, err, "Failed to run sample.")
}
})
}
}

cmd/lambda/registry.go Normal file

@@ -0,0 +1,26 @@
package main
import (
"git.maximhutz.com/max/lambda/internal/registry"
"git.maximhutz.com/max/lambda/pkg/convert"
"git.maximhutz.com/max/lambda/pkg/engine/normalorder"
"git.maximhutz.com/max/lambda/pkg/lambda"
"git.maximhutz.com/max/lambda/pkg/saccharine"
)
func GetRegistry() *registry.Registry {
r := registry.New()
// Conversions
registry.RegisterConversion(r, convert.Saccharine2Lambda, "saccharine", "lambda")
registry.RegisterConversion(r, convert.Lambda2Saccharine, "lambda", "saccharine")
// Engines
registry.RegisterEngine(r, normalorder.NewProcess, "normalorder", "lambda")
// Codecs
registry.RegisterCodec(r, lambda.Codec{}, "lambda")
registry.RegisterCodec(r, saccharine.Codec{}, "saccharine")
return r
}

go.mod

@@ -2,10 +2,9 @@ module git.maximhutz.com/max/lambda
go 1.25.5
require github.com/stretchr/testify v1.11.1
require github.com/spf13/cobra v1.10.2
require (
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/spf13/pflag v1.0.10 // indirect
)

go.sum

@@ -1,9 +1,11 @@
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU=
github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4=
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

internal/cli/cli.go Normal file

@@ -0,0 +1,2 @@
// Package cli provides various utilities to the 'lambda' program.
package cli


@@ -1,27 +1,29 @@
package config
package cli
import (
"fmt"
"os"
)
// A method of writing output to the user.
// A Destination is a method of writing output to the user.
type Destination interface {
// Write data to this destination.
Write(data string) error
}
// A destination writing to stdout.
// A StdoutDestination writes to stdout.
type StdoutDestination struct{}
// Write outputs to standard output.
func (d StdoutDestination) Write(data string) error {
fmt.Println(data)
return nil
}
// A destination writing to a file.
// A FileDestination writes to a file.
type FileDestination struct{ Path string }
// Write outputs to a file.
func (d FileDestination) Write(data string) error {
return os.WriteFile(d.Path, []byte(data+"\n"), 0644)
}


@@ -1,18 +0,0 @@
// Package "cli" provides miscellaneous helper functions.
package cli
import (
"fmt"
"os"
)
// A helper function to handle errors in the program. If it is given an error,
// the program will print the error and exit.
func HandleError(err error) {
if err == nil {
return
}
fmt.Fprintln(os.Stderr, "ERROR:", err)
os.Exit(1)
}


@@ -1,24 +1,26 @@
package config
package cli
import (
"io"
"os"
)
// A method of extracting input from the user.
// A Source is a method of extracting input from the user.
type Source interface {
// Fetch data from this source.
// Extract fetches data from this source.
Extract() (string, error)
}
// A source defined by a string.
// A StringSource is defined by a string.
type StringSource struct{ Data string }
// Extract pulls input data from the internal string.
func (s StringSource) Extract() (string, error) { return s.Data, nil }
// A source pulling from standard input.
// A StdinSource pulls from standard input.
type StdinSource struct{}
// Extract pulls input data from standard input.
func (s StdinSource) Extract() (string, error) {
data, err := io.ReadAll(os.Stdin)
if err != nil {
@@ -28,9 +30,10 @@ func (s StdinSource) Extract() (string, error) {
return string(data), nil
}
// A source reading from a file.
// A FileSource reads from a file.
type FileSource struct{ Path string }
// Extract pulls input data from the file source.
func (s FileSource) Extract() (string, error) {
data, err := os.ReadFile(s.Path)
if err != nil {


@@ -1,21 +0,0 @@
// Package "config" parses and handles the user settings given to the program.
package config
// Interpreter specifies the reduction engine to use.
type Interpreter string
const (
LambdaInterpreter Interpreter = "lambda"
DeBruijnInterpreter Interpreter = "debruijn"
)
// Configuration settings for the program.
type Config struct {
Source Source // The source code given to the program.
Destination Destination // The destination for the final result.
Verbose bool // Whether or not to print debug logs.
Explanation bool // Whether or not to print an explanation of the reduction.
Profile string // If not empty, write a CPU profile during execution.
Statistics bool // Whether or not to print statistics.
Interpreter Interpreter // The interpreter engine to use.
}


@@ -1,23 +0,0 @@
package config
import (
"log/slog"
"os"
)
// Returns a structured logger with the appropriate configurations.
func (c Config) GetLogger() *slog.Logger {
// By default, only print out errors.
level := slog.LevelError
// If the user set the output to be "VERBOSE", return the debug logs.
if c.Verbose {
level = slog.LevelInfo
}
return slog.New(
slog.NewTextHandler(os.Stdout, &slog.HandlerOptions{
Level: level,
}),
)
}


@@ -1,69 +0,0 @@
package config
import (
"flag"
"fmt"
)
// Extract the program configuration from the command-line arguments.
func FromArgs() (*Config, error) {
// Relevant flags.
verbose := flag.Bool("v", false, "Verbosity. If set, the program will print logs.")
explanation := flag.Bool("x", false, "Explanation. Whether or not to show all reduction steps.")
statistics := flag.Bool("s", false, "Statistics. If set, the process will print various statistics about the run.")
profile := flag.String("p", "", "CPU profiling. If an output file is defined, the program will profile its execution and dump its results into it.")
file := flag.String("f", "", "File. If set, read source from the specified file.")
output := flag.String("o", "", "Output. If set, write result to the specified file. Use '-' for stdout (default).")
interpreter := flag.String("i", "lambda", "Interpreter. The reduction engine to use: 'lambda' or 'debruijn'.")
flag.Parse()
// Validate interpreter flag.
var interpType Interpreter
switch *interpreter {
case "lambda":
interpType = LambdaInterpreter
case "debruijn":
interpType = DeBruijnInterpreter
default:
return nil, fmt.Errorf("invalid interpreter: %s (must be 'lambda' or 'debruijn')", *interpreter)
}
// Parse source type.
var source Source
if *file != "" {
// File flag takes precedence.
if flag.NArg() > 0 {
return nil, fmt.Errorf("cannot specify both -f flag and positional argument")
}
source = FileSource{Path: *file}
} else if flag.NArg() == 0 {
return nil, fmt.Errorf("no input given")
} else if flag.NArg() > 1 {
return nil, fmt.Errorf("more than 1 command-line argument")
} else {
// Positional argument.
if flag.Arg(0) == "-" {
source = StdinSource{}
} else {
source = StringSource{Data: flag.Arg(0)}
}
}
// Parse destination type.
var destination Destination
if *output == "" || *output == "-" {
destination = StdoutDestination{}
} else {
destination = FileDestination{Path: *output}
}
return &Config{
Source: source,
Destination: destination,
Verbose: *verbose,
Explanation: *explanation,
Profile: *profile,
Statistics: *statistics,
Interpreter: interpType,
}, nil
}


@@ -1,23 +0,0 @@
package plugins
import (
"log/slog"
"git.maximhutz.com/max/lambda/pkg/reducer"
)
type Logs struct {
logger *slog.Logger
reducer reducer.Reducer
}
func NewLogs(logger *slog.Logger, r reducer.Reducer) *Logs {
plugin := &Logs{logger, r}
r.On(reducer.StepEvent, plugin.Step)
return plugin
}
func (t *Logs) Step() {
t.logger.Info("reduction", "tree", t.reducer.Expression().String())
}


@@ -1,31 +0,0 @@
// Package "explanation" provides an observer to gather the reasoning during the
// reduction, and present a thorough explanation to the user for each step.
package plugins
import (
"fmt"
"git.maximhutz.com/max/lambda/pkg/reducer"
)
// Track the reductions made by a reduction process.
type Explanation struct {
reducer reducer.Reducer
}
// Attaches a new explanation tracker to a reducer.
func NewExplanation(r reducer.Reducer) *Explanation {
plugin := &Explanation{reducer: r}
r.On(reducer.StartEvent, plugin.Start)
r.On(reducer.StepEvent, plugin.Step)
return plugin
}
func (t *Explanation) Start() {
fmt.Println(t.reducer.Expression().String())
}
func (t *Explanation) Step() {
fmt.Println(" =", t.reducer.Expression().String())
}


@@ -1,59 +0,0 @@
// Package "performance" provides a tracker to observe CPU performance during
// execution.
package plugins
import (
"os"
"path/filepath"
"runtime/pprof"
"git.maximhutz.com/max/lambda/pkg/reducer"
)
// Observes a reduction process, and publishes a CPU performance profile on
// completion.
type Performance struct {
File string
filePointer *os.File
Error error
}
// Create a performance tracker that outputs a profile to "file".
func NewPerformance(file string, process reducer.Reducer) *Performance {
plugin := &Performance{File: file}
process.On(reducer.StartEvent, plugin.Start)
process.On(reducer.StopEvent, plugin.Stop)
return plugin
}
// Begin profiling.
func (t *Performance) Start() {
var absPath string
absPath, t.Error = filepath.Abs(t.File)
if t.Error != nil {
return
}
t.Error = os.MkdirAll(filepath.Dir(absPath), 0777)
if t.Error != nil {
return
}
t.filePointer, t.Error = os.Create(absPath)
if t.Error != nil {
return
}
t.Error = pprof.StartCPUProfile(t.filePointer)
if t.Error != nil {
return
}
}
// Stop profiling.
func (t *Performance) Stop() {
pprof.StopCPUProfile()
t.filePointer.Close()
}


@@ -1,44 +0,0 @@
package plugins
import (
"fmt"
"os"
"time"
"git.maximhutz.com/max/lambda/internal/statistics"
"git.maximhutz.com/max/lambda/pkg/reducer"
)
// An observer, to track reduction performance.
type Statistics struct {
start time.Time
steps uint64
}
// Create a new reduction performance Statistics.
func NewStatistics(r reducer.Reducer) *Statistics {
plugin := &Statistics{}
r.On(reducer.StartEvent, plugin.Start)
r.On(reducer.StepEvent, plugin.Step)
r.On(reducer.StopEvent, plugin.Stop)
return plugin
}
func (t *Statistics) Start() {
t.start = time.Now()
t.steps = 0
}
func (t *Statistics) Step() {
t.steps++
}
func (t *Statistics) Stop() {
results := statistics.Results{
StepsTaken: t.steps,
TimeElapsed: uint64(time.Since(t.start).Milliseconds()),
}
fmt.Fprint(os.Stderr, results.String())
}


@@ -0,0 +1,58 @@
package registry
import (
"fmt"
"reflect"
"git.maximhutz.com/max/lambda/pkg/codec"
)
// A Codec is a type-erased codec that serializes and deserializes expressions
// as Expr values, regardless of the underlying representation type.
type Codec interface {
codec.Codec[Expr]
// InType returns the name of the representation this codec handles.
InType() string
}
// A registeredCodec adapts a typed codec.Codec[T] into the type-erased Codec
// interface. It wraps decoded values into Expr on decode, and extracts the
// underlying T from an Expr on encode.
type registeredCodec[T any] struct {
codec codec.Codec[T]
inType string
}
func (c registeredCodec[T]) Decode(s string) (Expr, error) {
t, err := c.codec.Decode(s)
if err != nil {
return nil, err
}
return NewExpr(c.inType, t), nil
}
func (c registeredCodec[T]) Encode(r Expr) (string, error) {
t, ok := r.Data().(T)
if !ok {
dataType := reflect.TypeOf(r.Data())
allowedType := reflect.TypeFor[T]()
return "", fmt.Errorf("codec for '%s' cannot parse '%s'", allowedType, dataType)
}
return c.codec.Encode(t)
}
func (c registeredCodec[T]) InType() string { return c.inType }
// RegisterCodec registers a typed codec under the given representation name.
// Returns an error if a codec for that representation is already registered.
func RegisterCodec[T any](registry *Registry, m codec.Codec[T], inType string) error {
if _, ok := registry.codecs[inType]; ok {
return fmt.Errorf("codec for '%s' already registered", inType)
}
registry.codecs[inType] = registeredCodec[T]{m, inType}
return nil
}
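The adapter above wraps a typed `codec.Codec[T]` behind a type-erased interface, recovering `T` with a checked assertion. The same pattern can be sketched in isolation; every name below (`Typed`, `Erased`, `adapter`, `intEnc`) is illustrative, not part of the package:

```go
package main

import (
	"fmt"
	"strconv"
)

// Typed is a generic interface, standing in for codec.Codec[T].
type Typed[T any] interface {
	Encode(T) (string, error)
}

// Erased hides the type parameter behind 'any', like registry.Codec.
type Erased interface {
	Encode(v any) (string, error)
}

// adapter wraps a Typed[T] and recovers T with a checked type
// assertion, mirroring registeredCodec.Encode.
type adapter[T any] struct{ inner Typed[T] }

func (a adapter[T]) Encode(v any) (string, error) {
	t, ok := v.(T)
	if !ok {
		return "", fmt.Errorf("cannot encode %T", v)
	}
	return a.inner.Encode(t)
}

type intEnc struct{}

func (intEnc) Encode(n int) (string, error) { return strconv.Itoa(n), nil }

func main() {
	var e Erased = adapter[int]{intEnc{}}
	s, err := e.Encode(42)
	fmt.Println(s, err) // 42 <nil>
	_, err = e.Encode("oops")
	fmt.Println(err != nil) // true
}
```

The checked assertion is what turns a would-be panic on mismatched representations into an ordinary error value.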


@@ -0,0 +1,59 @@
package registry
import (
"fmt"
"git.maximhutz.com/max/lambda/pkg/codec"
)
// A Conversion is a type-erased transformation from one representation to
// another. It operates on Expr values, hiding the underlying representation
// types.
type Conversion interface {
// InType returns the name of the source representation.
InType() string
// OutType returns the name of the target representation.
OutType() string
// Run applies the conversion to the given expression. Returns an error if
// the expression's data does not match the expected source type.
Run(Expr) (Expr, error)
}
// A registeredConversion adapts a typed codec.Conversion[T, U] into the
// type-erased Conversion interface. It extracts the underlying T from an Expr,
// applies the conversion, and wraps the result as a new Expr.
type registeredConversion[T, U any] struct {
conversion codec.Conversion[T, U]
inType, outType string
}
func (c registeredConversion[T, U]) Run(expr Expr) (Expr, error) {
t, ok := expr.Data().(T)
if !ok {
return nil, fmt.Errorf("could not parse '%v' as '%s'", expr.Data(), c.inType)
}
u, err := c.conversion(t)
if err != nil {
return nil, err
}
return NewExpr(c.outType, u), nil
}
func (c registeredConversion[T, U]) InType() string { return c.inType }
func (c registeredConversion[T, U]) OutType() string { return c.outType }
// RegisterConversion registers a typed conversion function between two
// representations.
func RegisterConversion[T, U any](
registry *Registry,
conversion codec.Conversion[T, U],
inType, outType string,
) error {
registry.converter.Add(registeredConversion[T, U]{conversion, inType, outType})
return nil
}


@@ -0,0 +1,30 @@
package registry
// A Converter is a directed graph of conversions between representations. Each
// node is a representation name, and each edge is a Conversion.
type Converter struct {
data map[string][]Conversion
}
// NewConverter creates an empty Converter with no registered conversions.
func NewConverter() *Converter {
return &Converter{data: map[string][]Conversion{}}
}
// Add registers a conversion, adding an edge from its source representation
// to its target representation.
func (g *Converter) Add(c Conversion) {
g.data[c.InType()] = append(g.data[c.InType()], c)
}
// ConversionsFrom returns all conversions that have the given representation
// as their source type.
func (g *Converter) ConversionsFrom(t string) []Conversion {
return g.data[t]
}


@@ -0,0 +1,58 @@
package registry
import (
"fmt"
"git.maximhutz.com/max/lambda/pkg/engine"
)
// An Engine is a type-erased evaluation engine that can load an expression
// into a runnable Process.
type Engine interface {
// Load prepares an expression for evaluation, returning a Process. Returns
// an error if the expression's data does not match the engine's expected
// representation type.
Load(Expr) (Process, error)
// Name returns the name of this engine.
Name() string
// InType returns the name of the representation this engine operates on.
InType() string
}
// A registeredEngine adapts a typed engine.Engine[T] into the type-erased
// Engine interface. It extracts the underlying T from an Expr before passing it
// to the engine.
type registeredEngine[T any] struct {
engine engine.Engine[T]
name string
inType string
}
func (e registeredEngine[T]) InType() string { return e.inType }
func (e registeredEngine[T]) Name() string { return e.name }
func (e registeredEngine[T]) Load(expr Expr) (Process, error) {
t, ok := expr.Data().(T)
if !ok {
return nil, fmt.Errorf("engine '%s' expects representation '%s', got '%s'", e.name, e.inType, expr.Repr())
}
process, err := e.engine(t)
if err != nil {
return nil, err
}
return registeredProcess[T]{process, e.inType}, nil
}
// RegisterEngine registers a typed engine under the given name. Returns an
// error if an engine with that name is already registered.
func RegisterEngine[T any](registry *Registry, e engine.Engine[T], name, inType string) error {
if _, ok := registry.engines[name]; ok {
return fmt.Errorf("engine '%s' already registered", name)
}
registry.engines[name] = &registeredEngine[T]{e, name, inType}
return nil
}

internal/registry/expr.go Normal file

@@ -0,0 +1,26 @@
package registry
// An Expr is a type-erased lambda calculus expression. It can have any type of
// representation, so long as that type is known to the registry it is handled
// by.
type Expr interface {
// Repr returns the name of the underlying representation. Two expressions
// with the same Repr() are assumed to have the same representation type.
Repr() string
// Data returns the underlying expression data.
Data() any
}
// A baseExpr is the default implementation of Expr.
type baseExpr struct {
id string
data any
}
func (e baseExpr) Repr() string { return e.id }
func (e baseExpr) Data() any { return e.data }
// NewExpr creates an Expr with the given representation name and data.
func NewExpr(id string, data any) Expr { return baseExpr{id, data} }


@@ -0,0 +1,35 @@
package registry
import (
"git.maximhutz.com/max/lambda/pkg/engine"
)
// A Process is a type-erased reduction process that operates on Expr values.
type Process interface {
engine.Process[Expr]
// InType returns the name of the representation this process operates on.
InType() string
}
// A registeredProcess adapts a typed engine.Process[T] into the type-erased
// Process interface. It wraps the result of Get into an Expr.
type registeredProcess[T any] struct {
process engine.Process[T]
inType string
}
func (p registeredProcess[T]) InType() string { return p.inType }
func (p registeredProcess[T]) Get() (Expr, error) {
s, err := p.process.Get()
if err != nil {
return nil, err
}
return NewExpr(p.inType, s), nil
}
func (p registeredProcess[T]) Step(i int) bool {
return p.process.Step(i)
}


@@ -0,0 +1,153 @@
// Package registry defines a structure to hold all available representations,
// engines, and conversions between them.
package registry
import (
"fmt"
"iter"
"maps"
)
// A Registry holds all representations, conversions, codecs, and engines
// available to the program.
type Registry struct {
codecs map[string]Codec
converter *Converter
engines map[string]Engine
}
// New makes an empty registry.
func New() *Registry {
return &Registry{
codecs: map[string]Codec{},
converter: NewConverter(),
engines: map[string]Engine{},
}
}
// GetEngine finds an engine based on its name. Returns an error if an engine
// with that name cannot be found.
func (r Registry) GetEngine(name string) (Engine, error) {
e, ok := r.engines[name]
if !ok {
return nil, fmt.Errorf("engine '%s' not found", name)
}
return e, nil
}
// ListEngines returns all available engines to the registry.
func (r Registry) ListEngines() iter.Seq[Engine] {
return maps.Values(r.engines)
}
// GetDefaultEngine infers the preferred engine for a representation. Returns an
// error if one cannot be chosen.
func (r *Registry) GetDefaultEngine(id string) (Engine, error) {
for _, engine := range r.engines {
if engine.InType() == id {
return engine, nil
}
}
// Fall back to the normal-order engine when no engine matches the
// representation directly.
return r.GetEngine("normalorder")
}
// ConvertTo attempts to convert an expression of one type of representation to
// another. Returns the converted expression, otherwise an error.
//
// It can convert between any two types of representations, given there is a
// valid conversion path between them. It uses BFS to traverse a graph of
// conversion edges, and converts along the shortest path.
func (r *Registry) ConvertTo(expr Expr, outType string) (Expr, error) {
path, err := r.ConversionPath(expr.Repr(), outType)
if err != nil {
return nil, err
}
result := expr
for _, conversion := range path {
result, err = conversion.Run(result)
if err != nil {
return nil, fmt.Errorf("converting '%s' to '%s': %w", conversion.InType(), conversion.OutType(), err)
}
}
return result, nil
}
// Marshal serializes an expression, given that representation has a codec.
// Returns an error if the representation is not registered, or it has no codec.
func (r *Registry) Marshal(expr Expr) (string, error) {
m, ok := r.codecs[expr.Repr()]
if !ok {
return "", fmt.Errorf("no marshaler for '%s'", expr.Repr())
}
return m.Encode(expr)
}
// Unmarshal deserializes an expression. Returns an error if the representation
// or a codec for it is not registered.
func (r *Registry) Unmarshal(s string, outType string) (Expr, error) {
m, ok := r.codecs[outType]
if !ok {
return nil, fmt.Errorf("no codec for '%s'", outType)
}
return m.Decode(s)
}
func reverse[T any](list []T) []T {
if list == nil {
return list
}
reversed := []T{}
for i := len(list) - 1; i >= 0; i-- {
reversed = append(reversed, list[i])
}
return reversed
}
// ConversionPath attempts to find a set of valid conversions that (if applied)
// convert one representation to another. Returns an error if no path can be
// found.
func (r *Registry) ConversionPath(from, to string) ([]Conversion, error) {
backtrack := map[string]Conversion{}
iteration := []string{from}
for len(iteration) > 0 {
nextIteration := []string{}
for _, item := range iteration {
for _, conversion := range r.converter.ConversionsFrom(item) {
if _, ok := backtrack[conversion.OutType()]; ok {
continue
}
nextIteration = append(nextIteration, conversion.OutType())
backtrack[conversion.OutType()] = conversion
}
}
iteration = nextIteration
}
reversedPath := []Conversion{}
current := to
for current != from {
conversion, ok := backtrack[current]
if !ok {
return nil, fmt.Errorf("no valid conversion from '%s' to '%s'", from, to)
}
reversedPath = append(reversedPath, conversion)
current = conversion.InType()
}
return reverse(reversedPath), nil
}
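`ConversionPath` records, for each representation discovered, the first conversion that reached it, then walks backwards from the target. The same BFS idea can be exercised standalone; the sketch below uses a plain map of string edges (the representation names and the `path` helper are illustrative, not the registry's API):

```go
package main

import "fmt"

// edges maps a representation to the representations reachable in one
// conversion step.
var edges = map[string][]string{
	"saccharine": {"lambda"},
	"lambda":     {"saccharine", "debruijn"},
}

// path finds a shortest conversion chain from 'from' to 'to' via BFS,
// mirroring Registry.ConversionPath: record the predecessor of each
// node the first time it is reached, then walk back from the target.
func path(from, to string) ([]string, bool) {
	prev := map[string]string{}
	frontier := []string{from}
	for len(frontier) > 0 {
		next := []string{}
		for _, node := range frontier {
			for _, out := range edges[node] {
				if _, seen := prev[out]; seen {
					continue
				}
				prev[out] = node
				next = append(next, out)
			}
		}
		frontier = next
	}
	// Walk backwards from the target to recover the path.
	chain := []string{}
	for cur := to; cur != from; cur = prev[cur] {
		if _, ok := prev[cur]; !ok {
			return nil, false // target never reached
		}
		chain = append([]string{cur}, chain...)
	}
	return append([]string{from}, chain...), true
}

func main() {
	p, ok := path("saccharine", "debruijn")
	fmt.Println(p, ok) // [saccharine lambda debruijn] true
}
```

Because nodes are only recorded on first discovery, the backtrack map always yields a shortest path, which is why the registry can convert along multi-step chains without enumerating them explicitly.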


@@ -1,28 +0,0 @@
// Package "statistics" provides a way to observe reduction speed during
// execution.
package statistics
import (
"fmt"
"strings"
)
// Statistics for a specific reduction.
type Results struct {
StepsTaken uint64 // Number of steps taken during execution.
TimeElapsed uint64 // The time (ms) taken for execution to complete.
}
// Returns the average number of operations per second of the execution.
func (r Results) OpsPerSecond() float32 {
return float32(r.StepsTaken) / (float32(r.TimeElapsed) / 1000)
}
// Format the results as a string.
func (r Results) String() string {
builder := strings.Builder{}
fmt.Fprintln(&builder, "Time Spent:", r.TimeElapsed, "ms")
fmt.Fprintln(&builder, "Steps:", r.StepsTaken)
fmt.Fprintln(&builder, "Speed:", r.OpsPerSecond(), "ops")
return builder.String()
}

pkg/codec/codec.go Normal file

@@ -0,0 +1,20 @@
// Package codec defines processes to convert between different representations
// of lambda calculus, and serialize the different representations.
package codec
// A Conversion is a function that turns one representation into another.
// Returns an error if the input expression cannot be converted.
type Conversion[T, U any] = func(T) (U, error)
// A Codec is an object that can serialize/deserialize one type of
// representation. It is assumed that for any x ∈ T, Decode(Encode(x)) = x.
type Codec[T any] interface {
// Encode takes an expression, and returns its serialized format, as a
// string. Returns an error if the expression cannot be serialized.
Encode(T) (string, error)
// Decode takes the serialized format of an expression, and returns its true
// value. Returns an error if the string doesn't correctly represent any
// valid expression.
Decode(string) (T, error)
}
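The round-trip law above, Decode(Encode(x)) = x, can be checked with a toy codec; `intCodec` below is a hypothetical implementation for illustration, not part of the package:

```go
package main

import (
	"fmt"
	"strconv"
)

// Codec mirrors the interface above for a single representation type.
type Codec[T any] interface {
	Encode(T) (string, error)
	Decode(string) (T, error)
}

// intCodec is a toy codec over int, used only to illustrate the
// round-trip law Decode(Encode(x)) == x.
type intCodec struct{}

func (intCodec) Encode(n int) (string, error) { return strconv.Itoa(n), nil }
func (intCodec) Decode(s string) (int, error) { return strconv.Atoi(s) }

func main() {
	var c Codec[int] = intCodec{}
	s, _ := c.Encode(42)
	n, _ := c.Decode(s)
	fmt.Println(s, n) // 42 42
}
```

Note the law only runs one way: Encode(Decode(s)) = s need not hold, since many strings can decode to the same expression.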


@@ -1,82 +0,0 @@
package convert
import (
"fmt"
"git.maximhutz.com/max/lambda/pkg/debruijn"
"git.maximhutz.com/max/lambda/pkg/lambda"
"git.maximhutz.com/max/lambda/pkg/set"
)
// DeBruijnToLambda converts a De Bruijn indexed expression back to named lambda calculus.
func DeBruijnToLambda(expr debruijn.Expression) lambda.Expression {
return deBruijnToLambdaWithContext(expr, []string{})
}
func deBruijnToLambdaWithContext(expr debruijn.Expression, context []string) lambda.Expression {
switch e := expr.(type) {
case *debruijn.Variable:
index := e.Index()
if index < len(context) {
// Bound variable: look up name in context.
name := context[len(context)-1-index]
return lambda.NewVariable(name)
}
// Free variable: use the label if available.
if e.Label() != "" {
return lambda.NewVariable(e.Label())
}
// Generate a name for free variables without labels.
return lambda.NewVariable(fmt.Sprintf("free%d", index))
case *debruijn.Abstraction:
// Generate a fresh parameter name.
used := collectUsedNames(e.Body(), context)
paramName := generateFreshName(used)
newContext := append(context, paramName)
body := deBruijnToLambdaWithContext(e.Body(), newContext)
return lambda.NewAbstraction(paramName, body)
case *debruijn.Application:
abs := deBruijnToLambdaWithContext(e.Abstraction(), context)
arg := deBruijnToLambdaWithContext(e.Argument(), context)
return lambda.NewApplication(abs, arg)
default:
panic("unknown expression type")
}
}
// collectUsedNames gathers all variable labels used in an expression.
func collectUsedNames(expr debruijn.Expression, context []string) *set.Set[string] {
used := set.New[string]()
for _, name := range context {
used.Add(name)
}
collectUsedNamesHelper(expr, used)
return used
}
func collectUsedNamesHelper(expr debruijn.Expression, used *set.Set[string]) {
switch e := expr.(type) {
case *debruijn.Variable:
if e.Label() != "" {
used.Add(e.Label())
}
case *debruijn.Abstraction:
collectUsedNamesHelper(e.Body(), used)
case *debruijn.Application:
collectUsedNamesHelper(e.Abstraction(), used)
collectUsedNamesHelper(e.Argument(), used)
}
}
// generateFreshName creates a fresh variable name not in the used set.
func generateFreshName(used *set.Set[string]) string {
for i := 0; ; i++ {
name := fmt.Sprintf("_%d", i)
if !used.Has(name) {
return name
}
}
}


@@ -1,44 +0,0 @@
package convert
import (
"git.maximhutz.com/max/lambda/pkg/debruijn"
"git.maximhutz.com/max/lambda/pkg/lambda"
)
// LambdaToDeBruijn converts a lambda calculus expression to De Bruijn indexed form.
// The context parameter tracks bound variables from outer abstractions.
func LambdaToDeBruijn(expr lambda.Expression) debruijn.Expression {
return lambdaToDeBruijnWithContext(expr, []string{})
}
func lambdaToDeBruijnWithContext(expr lambda.Expression, context []string) debruijn.Expression {
switch e := expr.(type) {
case *lambda.Variable:
name := e.Value()
// Search for the variable in the context (innermost to outermost).
for i := len(context) - 1; i >= 0; i-- {
if context[i] == name {
index := len(context) - 1 - i
return debruijn.NewVariable(index, name)
}
}
// Free variable: encode it with an index one past the current context,
// so it cannot collide with any bound variable's index.
return debruijn.NewVariable(len(context), name)
case *lambda.Abstraction:
// Add the parameter to the context.
newContext := append(context, e.Parameter())
body := lambdaToDeBruijnWithContext(e.Body(), newContext)
return debruijn.NewAbstraction(body)
case *lambda.Application:
abs := lambdaToDeBruijnWithContext(e.Abstraction(), context)
arg := lambdaToDeBruijnWithContext(e.Argument(), context)
return debruijn.NewApplication(abs, arg)
default:
panic("unknown expression type")
}
}

@@ -1,3 +1,5 @@
// Package convert defined some standard conversions between various types of
// representations.
package convert
import (
@@ -7,41 +9,41 @@ import (
"git.maximhutz.com/max/lambda/pkg/saccharine"
)
func convertAtom(n *saccharine.Atom) lambda.Expression {
return lambda.NewVariable(n.Name)
func encodeAtom(n *saccharine.Atom) lambda.Expression {
return lambda.Variable{Name: n.Name}
}
func convertAbstraction(n *saccharine.Abstraction) lambda.Expression {
result := SaccharineToLambda(n.Body)
func encodeAbstraction(n *saccharine.Abstraction) lambda.Expression {
result := encodeExpression(n.Body)
parameters := n.Parameters
// If the function has no parameters, it is a thunk. Lambda calculus still
// requires _some_ parameter exists, so generate one.
if len(parameters) == 0 {
freeVars := lambda.GetFreeVariables(result)
freeVars := lambda.GetFree(result)
freshName := lambda.GenerateFreshName(freeVars)
parameters = append(parameters, freshName)
}
for i := len(parameters) - 1; i >= 0; i-- {
result = lambda.NewAbstraction(parameters[i], result)
result = lambda.Abstraction{Parameter: parameters[i], Body: result}
}
return result
}
func convertApplication(n *saccharine.Application) lambda.Expression {
result := SaccharineToLambda(n.Abstraction)
func encodeApplication(n *saccharine.Application) lambda.Expression {
result := encodeExpression(n.Abstraction)
arguments := []lambda.Expression{}
for _, argument := range n.Arguments {
convertedArgument := SaccharineToLambda(argument)
arguments = append(arguments, convertedArgument)
encodedArgument := encodeExpression(argument)
arguments = append(arguments, encodedArgument)
}
for _, argument := range arguments {
result = lambda.NewApplication(result, argument)
result = lambda.Application{Abstraction: result, Argument: argument}
}
return result
@@ -51,24 +53,24 @@ func reduceLet(s *saccharine.LetStatement, e lambda.Expression) lambda.Expressio
var value lambda.Expression
if len(s.Parameters) == 0 {
value = SaccharineToLambda(s.Body)
value = encodeExpression(s.Body)
} else {
value = convertAbstraction(saccharine.NewAbstraction(s.Parameters, s.Body))
value = encodeAbstraction(&saccharine.Abstraction{Parameters: s.Parameters, Body: s.Body})
}
return lambda.NewApplication(
lambda.NewAbstraction(s.Name, e),
value,
)
return lambda.Application{
Abstraction: lambda.Abstraction{Parameter: s.Name, Body: e},
Argument: value,
}
}
func reduceDeclare(s *saccharine.DeclareStatement, e lambda.Expression) lambda.Expression {
freshVar := lambda.GenerateFreshName(lambda.GetFreeVariables(e))
freshVar := lambda.GenerateFreshName(lambda.GetFree(e))
return lambda.NewApplication(
lambda.NewAbstraction(freshVar, e),
SaccharineToLambda(s.Value),
)
return lambda.Application{
Abstraction: lambda.Abstraction{Parameter: freshVar, Body: e},
Argument: encodeExpression(s.Value),
}
}
func reduceStatement(s saccharine.Statement, e lambda.Expression) lambda.Expression {
@@ -82,8 +84,8 @@ func reduceStatement(s saccharine.Statement, e lambda.Expression) lambda.Express
}
}
func convertClause(n *saccharine.Clause) lambda.Expression {
result := SaccharineToLambda(n.Returns)
func encodeClause(n *saccharine.Clause) lambda.Expression {
result := encodeExpression(n.Returns)
for i := len(n.Statements) - 1; i >= 0; i-- {
result = reduceStatement(n.Statements[i], result)
@@ -92,17 +94,45 @@ func convertClause(n *saccharine.Clause) lambda.Expression {
return result
}
func SaccharineToLambda(n saccharine.Expression) lambda.Expression {
switch n := n.(type) {
func encodeExpression(s saccharine.Expression) lambda.Expression {
switch s := s.(type) {
case *saccharine.Atom:
return convertAtom(n)
return encodeAtom(s)
case *saccharine.Abstraction:
return convertAbstraction(n)
return encodeAbstraction(s)
case *saccharine.Application:
return convertApplication(n)
return encodeApplication(s)
case *saccharine.Clause:
return convertClause(n)
return encodeClause(s)
default:
panic(fmt.Errorf("unknown expression type: %T", n))
panic(fmt.Errorf("unknown expression type: %T", s))
}
}
func decodeExpression(l lambda.Expression) saccharine.Expression {
switch l := l.(type) {
case lambda.Variable:
return &saccharine.Atom{Name: l.Name}
case lambda.Abstraction:
return &saccharine.Abstraction{
Parameters: []string{l.Parameter},
Body: decodeExpression(l.Body)}
case lambda.Application:
return &saccharine.Application{
Abstraction: decodeExpression(l.Abstraction),
Arguments: []saccharine.Expression{decodeExpression(l.Argument)}}
default:
panic(fmt.Errorf("unknown expression type: %T", l))
}
}
// Lambda2Saccharine converts a pure lambda calculus expression into its
// Saccharine counterpart.
func Lambda2Saccharine(l lambda.Expression) (saccharine.Expression, error) {
return decodeExpression(l), nil
}
// Saccharine2Lambda desugars a saccharine expression into pure lambda calculus.
func Saccharine2Lambda(s saccharine.Expression) (lambda.Expression, error) {
return encodeExpression(s), nil
}

@@ -1,119 +0,0 @@
// Package debruijn provides De Bruijn indexed lambda calculus expressions.
// De Bruijn indices eliminate the need for variable names by using numeric
// indices to refer to bound variables, avoiding capture issues during substitution.
package debruijn
import "git.maximhutz.com/max/lambda/pkg/expr"
// Expression is the interface for all De Bruijn indexed expression types.
// It embeds the general expr.Expression interface for cross-mode compatibility.
type Expression interface {
expr.Expression
Accept(Visitor)
}
/** ------------------------------------------------------------------------- */
// Abstraction represents a lambda abstraction without a named parameter.
// In De Bruijn notation, the parameter is implicit and referenced by index 0
// within the body.
type Abstraction struct {
body Expression
}
// Body returns the body of the abstraction.
func (a *Abstraction) Body() Expression {
return a.body
}
// Accept implements the Visitor pattern.
func (a *Abstraction) Accept(v Visitor) {
v.VisitAbstraction(a)
}
// String returns the De Bruijn notation string representation.
func (a *Abstraction) String() string {
return Stringify(a)
}
// NewAbstraction creates a new De Bruijn abstraction with the given body.
func NewAbstraction(body Expression) *Abstraction {
return &Abstraction{body: body}
}
/** ------------------------------------------------------------------------- */
// Application represents the application of one expression to another.
type Application struct {
abstraction Expression
argument Expression
}
// Abstraction returns the function expression being applied.
func (a *Application) Abstraction() Expression {
return a.abstraction
}
// Argument returns the argument expression.
func (a *Application) Argument() Expression {
return a.argument
}
// Accept implements the Visitor pattern.
func (a *Application) Accept(v Visitor) {
v.VisitApplication(a)
}
// String returns the De Bruijn notation string representation.
func (a *Application) String() string {
return Stringify(a)
}
// NewApplication creates a new application expression.
func NewApplication(abstraction Expression, argument Expression) *Application {
return &Application{abstraction: abstraction, argument: argument}
}
/** ------------------------------------------------------------------------- */
// Variable represents a De Bruijn indexed variable.
// The index indicates how many binders to skip to find the binding abstraction.
// The label is an optional hint for display purposes.
type Variable struct {
index int
label string
}
// Index returns the De Bruijn index.
func (v *Variable) Index() int {
return v.index
}
// Label returns the optional variable label.
func (v *Variable) Label() string {
return v.label
}
// Accept implements the Visitor pattern.
func (v *Variable) Accept(visitor Visitor) {
visitor.VisitVariable(v)
}
// String returns the De Bruijn notation string representation.
func (v *Variable) String() string {
return Stringify(v)
}
// NewVariable creates a new De Bruijn variable with the given index and label.
func NewVariable(index int, label string) *Variable {
return &Variable{index: index, label: label}
}
/** ------------------------------------------------------------------------- */
// Visitor interface for traversing De Bruijn expressions.
type Visitor interface {
VisitAbstraction(*Abstraction)
VisitApplication(*Application)
VisitVariable(*Variable)
}

@@ -1,76 +0,0 @@
package debruijn
// Iterator provides depth-first traversal of De Bruijn expressions.
type Iterator struct {
trace []*Expression
}
// NewIterator creates a new iterator starting at the given expression.
func NewIterator(expr *Expression) *Iterator {
return &Iterator{[]*Expression{expr}}
}
// Done returns true when the iterator has finished traversal.
func (i *Iterator) Done() bool {
return len(i.trace) == 0
}
// Current returns a pointer to the current expression.
func (i *Iterator) Current() *Expression {
if i.Done() {
return nil
}
return i.trace[len(i.trace)-1]
}
// Parent returns a pointer to the parent expression.
func (i *Iterator) Parent() *Expression {
if len(i.trace) < 2 {
return nil
}
return i.trace[len(i.trace)-2]
}
// Swap replaces the current expression with the given expression.
func (i *Iterator) Swap(with Expression) {
current := i.Current()
if current != nil {
*current = with
}
}
// Back moves the iterator back to the parent expression.
func (i *Iterator) Back() bool {
if i.Done() {
return false
}
i.trace = i.trace[:len(i.trace)-1]
return true
}
// Next advances the iterator to the next expression in leftmost-outermost order.
func (i *Iterator) Next() {
switch typed := (*i.Current()).(type) {
case *Abstraction:
i.trace = append(i.trace, &typed.body)
case *Application:
i.trace = append(i.trace, &typed.abstraction)
case *Variable:
for len(i.trace) > 1 {
if app, ok := (*i.Parent()).(*Application); ok {
if app.abstraction == *i.Current() {
i.Back()
i.trace = append(i.trace, &app.argument)
return
}
}
i.Back()
}
i.trace = []*Expression{}
}
}

@@ -1,66 +0,0 @@
package debruijn
import (
"git.maximhutz.com/max/lambda/pkg/emitter"
"git.maximhutz.com/max/lambda/pkg/expr"
"git.maximhutz.com/max/lambda/pkg/reducer"
)
// NormalOrderReducer implements normal order (leftmost-outermost) reduction
// for De Bruijn indexed lambda calculus expressions.
type NormalOrderReducer struct {
emitter.BaseEmitter[reducer.Event]
expression *Expression
}
// NewNormalOrderReducer creates a new normal order reducer.
func NewNormalOrderReducer(expression *Expression) *NormalOrderReducer {
return &NormalOrderReducer{
BaseEmitter: *emitter.New[reducer.Event](),
expression: expression,
}
}
// Expression returns the current expression state.
func (r *NormalOrderReducer) Expression() expr.Expression {
return *r.expression
}
// isViable checks if an expression is a redex (reducible expression).
// A redex is an application of an abstraction to an argument.
func isViable(e *Expression) (*Abstraction, Expression, bool) {
if e == nil {
return nil, nil, false
} else if app, appOk := (*e).(*Application); !appOk {
return nil, nil, false
} else if fn, fnOk := app.abstraction.(*Abstraction); !fnOk {
return nil, nil, false
} else {
return fn, app.argument, true
}
}
// Reduce performs normal order reduction on a De Bruijn expression.
func (r *NormalOrderReducer) Reduce() {
r.Emit(reducer.StartEvent)
it := NewIterator(r.expression)
for !it.Done() {
if fn, arg, ok := isViable(it.Current()); !ok {
it.Next()
} else {
// Substitute arg for variable 0 in the body.
substituted := Substitute(fn.body, 0, Shift(arg, 1, 0))
// Shift down to account for the removed abstraction.
it.Swap(Shift(substituted, -1, 0))
r.Emit(reducer.StepEvent)
if _, _, ok := isViable(it.Parent()); ok {
it.Back()
}
}
}
r.Emit(reducer.StopEvent)
}

@@ -1,32 +0,0 @@
package debruijn
// Shift increments all free variable indices in an expression by the given amount.
// A variable is free if its index is >= the cutoff (depth of nested abstractions).
// This is necessary when substituting an expression into a different binding context.
func Shift(expr Expression, amount int, cutoff int) Expression {
switch e := expr.(type) {
case *Variable:
if e.index >= cutoff {
return NewVariable(e.index+amount, e.label)
}
return e
case *Abstraction:
newBody := Shift(e.body, amount, cutoff+1)
if newBody == e.body {
return e
}
return NewAbstraction(newBody)
case *Application:
newAbs := Shift(e.abstraction, amount, cutoff)
newArg := Shift(e.argument, amount, cutoff)
if newAbs == e.abstraction && newArg == e.argument {
return e
}
return NewApplication(newAbs, newArg)
default:
return expr
}
}

@@ -1,35 +0,0 @@
package debruijn
import (
"strconv"
"strings"
)
type stringifyVisitor struct {
builder strings.Builder
}
func (v *stringifyVisitor) VisitVariable(a *Variable) {
v.builder.WriteString(strconv.Itoa(a.index))
}
func (v *stringifyVisitor) VisitAbstraction(f *Abstraction) {
v.builder.WriteRune('\\')
v.builder.WriteRune('.')
f.body.Accept(v)
}
func (v *stringifyVisitor) VisitApplication(c *Application) {
v.builder.WriteRune('(')
c.abstraction.Accept(v)
v.builder.WriteRune(' ')
c.argument.Accept(v)
v.builder.WriteRune(')')
}
// Stringify converts a De Bruijn expression to its string representation.
func Stringify(e Expression) string {
b := &stringifyVisitor{builder: strings.Builder{}}
e.Accept(b)
return b.builder.String()
}

@@ -1,34 +0,0 @@
package debruijn
// Substitute replaces the variable at the given index with the replacement expression.
// The replacement is shifted appropriately as we descend into nested abstractions.
func Substitute(expr Expression, index int, replacement Expression) Expression {
switch e := expr.(type) {
case *Variable:
if e.index == index {
return replacement
}
return e
case *Abstraction:
// When entering an abstraction, increment the target index and shift the
// replacement to account for the new binding context.
shiftedReplacement := Shift(replacement, 1, 0)
newBody := Substitute(e.body, index+1, shiftedReplacement)
if newBody == e.body {
return e
}
return NewAbstraction(newBody)
case *Application:
newAbs := Substitute(e.abstraction, index, replacement)
newArg := Substitute(e.argument, index, replacement)
if newAbs == e.abstraction && newArg == e.argument {
return e
}
return NewApplication(newAbs, newArg)
default:
return expr
}
}
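Shift and Substitute combine into a single beta step exactly as NormalOrderReducer does: shift the argument up by one, substitute it for index 0, then shift the result back down. A self-contained sketch with a minimal local De Bruijn AST (not the repository's types):

```go
package main

import "fmt"

// Minimal De Bruijn AST, local to this sketch.
type DB interface{ db() }

type DVar struct{ Index int }
type DAbs struct{ Body DB }
type DApp struct{ Fn, Arg DB }

func (DVar) db() {}
func (DAbs) db() {}
func (DApp) db() {}

// shift adds amount to every variable index >= cutoff, as in Shift above.
func shift(e DB, amount, cutoff int) DB {
	switch e := e.(type) {
	case DVar:
		if e.Index >= cutoff {
			return DVar{e.Index + amount}
		}
		return e
	case DAbs:
		return DAbs{shift(e.Body, amount, cutoff+1)}
	case DApp:
		return DApp{shift(e.Fn, amount, cutoff), shift(e.Arg, amount, cutoff)}
	}
	return e
}

// substitute replaces variable `index` with `with`, shifting the replacement
// when descending under a binder, as in Substitute above.
func substitute(e DB, index int, with DB) DB {
	switch e := e.(type) {
	case DVar:
		if e.Index == index {
			return with
		}
		return e
	case DAbs:
		return DAbs{substitute(e.Body, index+1, shift(with, 1, 0))}
	case DApp:
		return DApp{substitute(e.Fn, index, with), substitute(e.Arg, index, with)}
	}
	return e
}

// betaStep reduces (\ . body) arg by one step, combining shift and substitute
// the same way the reducer does.
func betaStep(fn DAbs, arg DB) DB {
	return shift(substitute(fn.Body, 0, shift(arg, 1, 0)), -1, 0)
}

func main() {
	// The identity (\ . 0) applied to a free variable returns that variable.
	fmt.Println(betaStep(DAbs{DVar{0}}, DVar{0}))
}
```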

@@ -1,6 +0,0 @@
// Package "deltanet" is a reduction strategy using ∆-nets.
package deltanet
type Graph struct {
Nodes []Node
}

@@ -1,94 +0,0 @@
package deltanet
/** ------------------------------------------------------------------------- */
// A connection between exactly two nodes in a graph.
type Edge struct {
A, B Node
}
// Returns all nodes the edge is connected to.
func (e Edge) GetConnections() []Node { return []Node{e.A, e.B} }
// Determines if a node is connected via this edge.
func (e Edge) IsConnected(n Node) bool { return e.A == n || e.B == n }
// Swap replaces the edge's connection to one node with a connection to another.
func (e *Edge) Swap(from Node, to Node) {
if e.A == from {
e.A = to
}
if e.B == from {
e.B = to
}
}
// Returns true if both nodes are attached to this edge via their principle ports.
func (e Edge) IsPrincipleEdge() bool {
return e.A.GetMainPort() == e && e.B.GetMainPort() == e
}
/** ------------------------------------------------------------------------- */
type Node interface {
// Returns the principle port that the node is attached to.
GetMainPort() Edge
// Returns all auxiliary ports that the node has. These ports are guaranteed
// to be ordered clockwise, as they would appear graphically.
GetAuxPorts() []Edge
// Returns the label of the node. May be blank.
GetLabel() string
}
/** ------------------------------------------------------------------------- */
type EraserNode struct {
Main Edge
}
func (n EraserNode) GetLabel() string { return "Ⓧ" }
func (n EraserNode) GetMainPort() Edge { return n.Main }
func (n EraserNode) GetAuxPorts() []Edge { return []Edge{} }
/** ------------------------------------------------------------------------- */
type ReplicatorNode struct {
Main Edge
Level uint
Aux []Edge
Deltas []int
}
func (n ReplicatorNode) GetLabel() string { return "" }
func (n ReplicatorNode) GetMainPort() Edge { return n.Main }
func (n ReplicatorNode) GetAuxPorts() []Edge { return n.Aux }
// Returns the level of the replicator node.
func (n ReplicatorNode) GetLevel() uint { return n.Level }
/** ------------------------------------------------------------------------- */
type FanNode struct {
Label string
Main Edge
Left, Right Edge
}
func (n FanNode) GetLabel() string { return n.Label }
func (n FanNode) GetMainPort() Edge { return n.Main }
func (n FanNode) GetAuxPorts() []Edge { return []Edge{n.Left, n.Right} }
/** ------------------------------------------------------------------------- */
type TerminalNode struct {
Label string
Main Edge
}
func (n TerminalNode) GetLabel() string { return n.Label }
func (n TerminalNode) GetMainPort() Edge { return n.Main }
func (n TerminalNode) GetAuxPorts() []Edge { return []Edge{} }
/** ------------------------------------------------------------------------- */

@@ -1,46 +0,0 @@
package emitter
import "git.maximhutz.com/max/lambda/pkg/set"
type Emitter[E comparable] interface {
On(E, func()) Listener[E]
Off(Listener[E])
Emit(E)
}
type BaseEmitter[E comparable] struct {
listeners map[E]*set.Set[Listener[E]]
}
func (e *BaseEmitter[E]) On(kind E, fn func()) Listener[E] {
if e.listeners[kind] == nil {
e.listeners[kind] = set.New[Listener[E]]()
}
listener := &BaseListener[E]{kind, fn}
e.listeners[kind].Add(listener)
return listener
}
func (e *BaseEmitter[E]) Off(listener Listener[E]) {
kind := listener.Kind()
if e.listeners[kind] != nil {
e.listeners[kind].Remove(listener)
}
}
func (e *BaseEmitter[E]) Emit(event E) {
if e.listeners[event] == nil {
e.listeners[event] = set.New[Listener[E]]()
}
for listener := range e.listeners[event].Items() {
listener.Run()
}
}
func New[E comparable]() *BaseEmitter[E] {
return &BaseEmitter[E]{
listeners: map[E]*set.Set[Listener[E]]{},
}
}

@@ -1,19 +0,0 @@
package emitter
type Listener[E comparable] interface {
Kind() E
Run()
}
type BaseListener[E comparable] struct {
kind E
fn func()
}
func (l BaseListener[E]) Kind() E {
return l.kind
}
func (l BaseListener[E]) Run() {
l.fn()
}
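The emitter is used by the reducer to publish start/step/stop events. A minimal sketch of the same on/emit shape (listeners kept in a slice rather than the repository's set, so ordering is deterministic; names are local to this sketch):

```go
package main

import "fmt"

// emitter keys listener callbacks by event kind, as BaseEmitter does above.
type emitter[E comparable] struct {
	listeners map[E][]func()
}

func newEmitter[E comparable]() *emitter[E] {
	return &emitter[E]{listeners: map[E][]func(){}}
}

// on registers a callback for an event kind.
func (e *emitter[E]) on(kind E, fn func()) {
	e.listeners[kind] = append(e.listeners[kind], fn)
}

// emit runs every callback registered for the given kind.
func (e *emitter[E]) emit(kind E) {
	for _, fn := range e.listeners[kind] {
		fn()
	}
}

func main() {
	steps := 0
	em := newEmitter[string]()
	em.on("step", func() { steps++ })
	em.emit("step")
	em.emit("step")
	fmt.Println(steps)
}
```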

pkg/engine/engine.go Normal file
@@ -0,0 +1,18 @@
// Package engine defines a general process of reducing a lambda calculus
// expression.
package engine
// A Process handles the reduction of a single expression.
type Process[T any] interface {
// Get the current state of the process.
// Returns an error if the current state cannot be represented.
Get() (T, error)
// Step performs the given number of reductions on the representation. If
// the count is negative, it performs as many reductions as possible.
// Returns whether a reduction was performed.
Step(int) bool
}
// An Engine is a function that generates reduction processes.
type Engine[T any] = func(T) (Process[T], error)

@@ -0,0 +1,42 @@
// Package normalorder contains an engine that reduces a 'lambda.Expression'
// in the normal order.
package normalorder
import (
"git.maximhutz.com/max/lambda/pkg/engine"
"git.maximhutz.com/max/lambda/pkg/lambda"
)
type process struct {
expr lambda.Expression
}
func (e process) Get() (lambda.Expression, error) {
return e.expr, nil
}
func (e *process) Set(l lambda.Expression) error {
e.expr = l
return nil
}
func (e *process) Step(i int) bool {
// A negative step count means "reduce as many times as possible", per the
// engine.Process contract.
for n := 0; n != i; n++ {
next, reduced := ReduceOnce(e.expr)
if !reduced {
return i < 0 && n > 0
}
e.expr = next
}
return true
}
// NewProcess creates a new reduction process.
func NewProcess(expression lambda.Expression) (engine.Process[lambda.Expression], error) {
return &process{expr: expression}, nil
}
var _ engine.Process[lambda.Expression] = (*process)(nil)
var _ engine.Engine[lambda.Expression] = NewProcess

@@ -0,0 +1,39 @@
package normalorder
import "git.maximhutz.com/max/lambda/pkg/lambda"
// ReduceOnce attempts to apply a single reduction to a lambda expression.
// It returns (1) the resulting expression (reduced or not), and (2) whether a
// reduction was applied.
//
// If a reduction is not applied, it returns the original expression.
func ReduceOnce(e lambda.Expression) (lambda.Expression, bool) {
switch e := e.(type) {
case lambda.Abstraction:
body, reduced := ReduceOnce(e.Body)
if reduced {
return lambda.Abstraction{Parameter: e.Parameter, Body: body}, true
}
return e, false
case lambda.Application:
if fn, fnOk := e.Abstraction.(lambda.Abstraction); fnOk {
return lambda.Substitute(fn.Body, fn.Parameter, e.Argument), true
}
abs, reduced := ReduceOnce(e.Abstraction)
if reduced {
return lambda.Application{Abstraction: abs, Argument: e.Argument}, true
}
arg, reduced := ReduceOnce(e.Argument)
if reduced {
return lambda.Application{Abstraction: e.Abstraction, Argument: arg}, true
}
return e, false
default:
return e, false
}
}
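A process built on ReduceOnce reaches normal form by iterating until no step applies. A generic sketch of that driver loop (`reduceAll` is a hypothetical helper, not in the repository; the stand-in step function here just halves an integer):

```go
package main

import "fmt"

// reduceAll repeatedly applies a single-step reducer until it reports that no
// reduction was possible, mirroring how process.Step drives ReduceOnce above.
// It returns the final value and the number of steps taken.
func reduceAll[T any](e T, step func(T) (T, bool)) (T, int) {
	steps := 0
	for {
		next, reduced := step(e)
		if !reduced {
			return e, steps
		}
		e = next
		steps++
	}
}

func main() {
	// Stand-in step function: "reduce" an int by halving until it reaches 1.
	half := func(n int) (int, bool) {
		if n <= 1 {
			return n, false
		}
		return n / 2, true
	}
	result, steps := reduceAll(8, half)
	fmt.Println(result, steps)
}
```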

@@ -1,11 +0,0 @@
// Package expr provides the abstract Expression interface for all evaluatable
// expression types in the lambda interpreter.
package expr
// Expression is the base interface for all evaluatable expression types.
// Different evaluation modes (lambda calculus, SKI combinators, typed lambda
// calculus, etc.) implement this interface with their own concrete types.
type Expression interface {
// String returns a human-readable representation of the expression.
String() string
}

@@ -1,35 +1,25 @@
/*
Package "iterator"
*/
// Package iterator defines a generic way to iterate over a slice of data.
package iterator
import "fmt"
// An iterator over slices.
// An Iterator traverses over slices.
type Iterator[T any] struct {
items []T
index int
}
// Create a new iterator, over a set of items.
// Of creates a new iterator over the given slice of items.
func Of[T any](items []T) *Iterator[T] {
return &Iterator[T]{items: items, index: 0}
}
// Returns the current position of the iterator.
// Index returns the current position of the iterator.
func (i Iterator[T]) Index() int {
return i.index
}
func (i Iterator[T]) Copy() *Iterator[T] {
return &Iterator[T]{items: i.items, index: i.index}
}
func (i *Iterator[T]) Sync(o *Iterator[T]) {
i.index = o.index
}
// Create a new iterator, over a set of items.
// Get returns the datum at the current position of the iterator.
func (i Iterator[T]) Get() (T, error) {
var null T
if i.Done() {
@@ -39,22 +29,26 @@ func (i Iterator[T]) Get() (T, error) {
return i.items[i.index], nil
}
// MustGet is a version of Get that panics if the datum cannot be returned.
func (i Iterator[T]) MustGet() T {
var null T
if i.Done() {
return null
t, err := i.Get()
if err != nil {
panic(fmt.Errorf("cannot get current token: %w", err))
}
return i.items[i.index]
return t
}
// Forward increments the iterator if the iterator is not yet at the end of the
// slice.
func (i *Iterator[T]) Forward() {
if !i.Done() {
i.index++
}
}
// Create a new iterator, over a set of items.
// Next attempts to increment the iterator. Returns an error if it cannot be
// incremented.
func (i *Iterator[T]) Next() (T, error) {
item, err := i.Get()
if err == nil {
@@ -64,22 +58,37 @@ func (i *Iterator[T]) Next() (T, error) {
return item, err
}
// Create a new iterator, over a set of items.
// Back decrements the iterator. If the iterator is already at the beginning of
// the slice, this is a no-op.
func (i *Iterator[T]) Back() {
i.index = max(i.index-1, 0)
}
// Returns the current position of the iterator.
// Done returns whether the iterator is at the end of the slice or not.
func (i Iterator[T]) Done() bool {
return i.index == len(i.items)
}
func Do[T any, U any](i *Iterator[T], fn func(i *Iterator[T]) (U, error)) (U, error) {
i2 := i.Copy()
// While increments the iterator as long as the current item satisfies the
// predicate. The first item that does not match is left unconsumed.
func (i *Iterator[T]) While(fn func(T) bool) {
for !i.Done() {
if !fn(i.MustGet()) {
return
}
i.Forward()
}
}
out, err := fn(i2)
if err == nil {
i.Sync(i2)
// Try attempts to perform an operation using the iterator. If the operation
// succeeds, the iterator keeps its new position. If the operation fails, the
// iterator is rolled back, and an error is returned.
func Try[T any, U any](i *Iterator[T], fn func(i *Iterator[T]) (U, error)) (U, error) {
saved := i.index
out, err := fn(i)
if err != nil {
i.index = saved
}
return out, err

pkg/lambda/codec.go Normal file
@@ -0,0 +1,27 @@
package lambda
import (
"git.maximhutz.com/max/lambda/pkg/codec"
)
// A Codec is a [codec.Codec] that serializes lambda calculus expressions.
type Codec struct{}
// Decode parses a string as lambda calculus.
// Returns an error if it cannot.
func (m Codec) Decode(s string) (Expression, error) {
tokens, err := scan(s)
if err != nil {
return nil, err
}
return parse(tokens)
}
// Encode turns a lambda calculus expression into a string.
// Returns an error if it cannot.
func (m Codec) Encode(e Expression) (string, error) {
return Stringify(e), nil
}
var _ codec.Codec[Expression] = (*Codec)(nil)

@@ -1,94 +0,0 @@
package lambda
import "git.maximhutz.com/max/lambda/pkg/expr"
// Expression is the interface for all lambda calculus expression types.
// It embeds the general expr.Expression interface for cross-mode compatibility.
type Expression interface {
expr.Expression
Accept(Visitor)
}
/** ------------------------------------------------------------------------- */
type Abstraction struct {
parameter string
body Expression
}
func (a *Abstraction) Parameter() string {
return a.parameter
}
func (a *Abstraction) Body() Expression {
return a.body
}
func (a *Abstraction) Accept(v Visitor) {
v.VisitAbstraction(a)
}
func (a *Abstraction) String() string {
return Stringify(a)
}
func NewAbstraction(parameter string, body Expression) *Abstraction {
return &Abstraction{parameter: parameter, body: body}
}
/** ------------------------------------------------------------------------- */
type Application struct {
abstraction Expression
argument Expression
}
func (a *Application) Abstraction() Expression {
return a.abstraction
}
func (a *Application) Argument() Expression {
return a.argument
}
func (a *Application) Accept(v Visitor) {
v.VisitApplication(a)
}
func (a *Application) String() string {
return Stringify(a)
}
func NewApplication(abstraction Expression, argument Expression) *Application {
return &Application{abstraction: abstraction, argument: argument}
}
/** ------------------------------------------------------------------------- */
type Variable struct {
value string
}
func (v *Variable) Value() string {
return v.value
}
func (v *Variable) Accept(visitor Visitor) {
visitor.VisitVariable(v)
}
func (v *Variable) String() string {
return Stringify(v)
}
func NewVariable(name string) *Variable {
return &Variable{value: name}
}
/** ------------------------------------------------------------------------- */
type Visitor interface {
VisitAbstraction(*Abstraction)
VisitApplication(*Application)
VisitVariable(*Variable)
}

@@ -6,7 +6,9 @@ import (
"git.maximhutz.com/max/lambda/pkg/set"
)
func GenerateFreshName(used *set.Set[string]) string {
// GenerateFreshName generates a variable name that is not in the used set.
// This function does not mutate the used set.
func GenerateFreshName(used set.Set[string]) string {
for i := uint64(0); ; i++ {
attempt := "_" + string(strconv.AppendUint(nil, i, 10))

@@ -1,20 +1,27 @@
package lambda
import "git.maximhutz.com/max/lambda/pkg/set"
import (
"fmt"
func GetFreeVariables(e Expression) *set.Set[string] {
"git.maximhutz.com/max/lambda/pkg/set"
)
// GetFree returns the set of all free variable names in the expression.
// This function does not mutate the input expression.
// The returned set is newly allocated and can be modified by the caller.
func GetFree(e Expression) set.Set[string] {
switch e := e.(type) {
case *Variable:
return set.New(e.value)
case *Abstraction:
vars := GetFreeVariables(e.body)
vars.Remove(e.parameter)
case Variable:
return set.New(e.Name)
case Abstraction:
vars := GetFree(e.Body)
vars.Remove(e.Parameter)
return vars
case *Application:
vars := GetFreeVariables(e.abstraction)
vars.Merge(GetFreeVariables(e.argument))
case Application:
vars := GetFree(e.Abstraction)
vars.Merge(GetFree(e.Argument))
return vars
default:
return nil
panic(fmt.Errorf("unknown expression type: %v", e))
}
}
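GetFree's recursion removes a binder's parameter on the way back up, so only genuinely free names survive. A self-contained sketch of the same traversal with a plain map in place of the set type (local stand-in AST, not the repository's):

```go
package main

import "fmt"

// Minimal local AST mirroring the value types used by GetFree above.
type Expr interface{ expr() }

type Var struct{ Name string }
type Abs struct {
	Param string
	Body  Expr
}
type App struct{ Fn, Arg Expr }

func (Var) expr() {}
func (Abs) expr() {}
func (App) expr() {}

// free collects free variable names, deleting a binder's parameter after
// recursing into its body, just as GetFree does with its set.
func free(e Expr) map[string]bool {
	switch e := e.(type) {
	case Var:
		return map[string]bool{e.Name: true}
	case Abs:
		vars := free(e.Body)
		delete(vars, e.Param)
		return vars
	case App:
		vars := free(e.Fn)
		for name := range free(e.Arg) {
			vars[name] = true
		}
		return vars
	}
	return nil
}

func main() {
	// \x. (x y): y is free, x is bound.
	e := Abs{"x", App{Var{"x"}, Var{"y"}}}
	fmt.Println(free(e))
}
```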

@@ -1,14 +1,18 @@
package lambda
func IsFreeVariable(n string, e Expression) bool {
import "fmt"
// IsFree returns true if the variable name n occurs free in the expression.
// This function does not mutate the input expression.
func IsFree(e Expression, n string) bool {
switch e := e.(type) {
case *Variable:
return e.value == n
case *Abstraction:
return e.parameter != n && IsFreeVariable(n, e.body)
case *Application:
return IsFreeVariable(n, e.abstraction) || IsFreeVariable(n, e.argument)
case Variable:
return e.Name == n
case Abstraction:
return e.Parameter != n && IsFree(e.Body, n)
case Application:
return IsFree(e.Abstraction, n) || IsFree(e.Argument, n)
default:
return false
panic(fmt.Errorf("unknown expression type: %v", e))
}
}

@@ -1,68 +0,0 @@
package lambda
type Iterator struct {
trace []*Expression
}
func NewIterator(expr *Expression) *Iterator {
return &Iterator{[]*Expression{expr}}
}
func (i *Iterator) Done() bool {
return len(i.trace) == 0
}
func (i *Iterator) Current() *Expression {
if i.Done() {
return nil
}
return i.trace[len(i.trace)-1]
}
func (i *Iterator) Parent() *Expression {
if len(i.trace) < 2 {
return nil
}
return i.trace[len(i.trace)-2]
}
func (i *Iterator) Swap(with Expression) {
current := i.Current()
if current != nil {
*current = with
}
}
func (i *Iterator) Back() bool {
if i.Done() {
return false
}
i.trace = i.trace[:len(i.trace)-1]
return true
}
func (i *Iterator) Next() {
switch typed := (*i.Current()).(type) {
case *Abstraction:
i.trace = append(i.trace, &typed.body)
case *Application:
i.trace = append(i.trace, &typed.abstraction)
case *Variable:
for len(i.trace) > 1 {
if app, ok := (*i.Parent()).(*Application); ok {
if app.abstraction == *i.Current() {
i.Back()
i.trace = append(i.trace, &app.argument)
return
}
}
i.Back()
}
i.trace = []*Expression{}
}
}

pkg/lambda/lambda.go Normal file
@@ -0,0 +1,31 @@
// Package lambda defines the AST for the untyped lambda calculus.
package lambda
// An Expression is a node in the lambda calculus abstract syntax tree.
// It is a sealed interface; only types in this package may implement it.
type Expression interface {
expression()
}
// An Abstraction binds a single parameter over a body expression.
type Abstraction struct {
Parameter string
Body Expression
}
func (a Abstraction) expression() {}
// An Application applies an abstraction to a single argument.
type Application struct {
Abstraction Expression
Argument Expression
}
func (a Application) expression() {}
// A Variable is a named reference to a bound or free variable.
type Variable struct {
Name string
}
func (v Variable) expression() {}
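With the value-type AST above, expressions are built by struct literal rather than constructor functions. A quick self-contained sketch (local copies of the three shapes, plus a hypothetical `size` helper for illustration):

```go
package main

import "fmt"

// Local copies of the three AST shapes from pkg/lambda/lambda.go, so this
// sketch compiles on its own.
type Expression interface{ expression() }

type Abstraction struct {
	Parameter string
	Body      Expression
}
type Application struct {
	Abstraction Expression
	Argument    Expression
}
type Variable struct{ Name string }

func (Abstraction) expression() {}
func (Application) expression() {}
func (Variable) expression()    {}

// size counts the AST nodes in an expression.
func size(e Expression) int {
	switch e := e.(type) {
	case Abstraction:
		return 1 + size(e.Body)
	case Application:
		return 1 + size(e.Abstraction) + size(e.Argument)
	default:
		return 1
	}
}

func main() {
	// (\x. x) y, built directly from struct literals.
	e := Application{
		Abstraction: Abstraction{Parameter: "x", Body: Variable{Name: "x"}},
		Argument:    Variable{Name: "y"},
	}
	fmt.Println(size(e))
}
```

Note how value types (no pointers) make expressions comparable and cheap to copy, which is what lets ReduceOnce return fresh nodes without mutation.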

pkg/lambda/parse.go Normal file
@@ -0,0 +1,80 @@
package lambda
import (
"fmt"
"git.maximhutz.com/max/lambda/pkg/iterator"
"git.maximhutz.com/max/lambda/pkg/token"
)
type tokenIterator = iterator.Iterator[lambdaToken]
func parseVariable(i *tokenIterator) (Expression, error) {
if tok, err := token.ParseRawToken(i, tokenAtom); err != nil {
return nil, fmt.Errorf("expected variable (col %d): %w", i.Index(), err)
} else {
return Variable{Name: tok.Value}, nil
}
}
func parseAbstraction(i *tokenIterator) (Expression, error) {
if _, err := token.ParseRawToken(i, tokenSlash); err != nil {
return nil, fmt.Errorf("no backslash (col %d): %w", i.Index(), err)
} else if param, err := token.ParseRawToken(i, tokenAtom); err != nil {
return nil, fmt.Errorf("no param (col %d): %w", i.Index(), err)
} else if _, err := token.ParseRawToken(i, tokenDot); err != nil {
return nil, fmt.Errorf("no dot (col %d): %w", i.Index(), err)
} else if body, err := parseExpression(i); err != nil {
return nil, err
} else {
return Abstraction{Parameter: param.Value, Body: body}, nil
}
}
func parseApplication(i *tokenIterator) (Expression, error) {
if _, err := token.ParseRawToken(i, tokenOpenParen); err != nil {
return nil, fmt.Errorf("no opening paren (col %d): %w", i.Index(), err)
} else if abstraction, err := parseExpression(i); err != nil {
return nil, fmt.Errorf("expected function expression: %w", err)
} else if argument, err := parseExpression(i); err != nil {
return nil, fmt.Errorf("expected argument expression: %w", err)
} else if _, err := token.ParseRawToken(i, tokenCloseParen); err != nil {
return nil, fmt.Errorf("no closing paren (col %d): %w", i.Index(), err)
} else {
return Application{Abstraction: abstraction, Argument: argument}, nil
}
}
func parseExpression(i *tokenIterator) (Expression, error) {
peek, err := i.Get()
if err != nil {
return nil, err
}
switch peek.Type {
case tokenOpenParen:
return parseApplication(i)
case tokenSlash:
return parseAbstraction(i)
case tokenAtom:
return parseVariable(i)
default:
return nil, fmt.Errorf("expected expression, got '%v' (col %d)", peek.Value, peek.Column)
}
}
// parse converts a token slice into a lambda calculus expression.
func parse(tokens []lambdaToken) (Expression, error) {
i := iterator.Of(tokens)
exp, err := parseExpression(i)
if err != nil {
return nil, err
}
if !i.Done() {
return nil, fmt.Errorf("expected EOF, found more tokens (col %d)", i.MustGet().Column)
}
return exp, nil
}


@@ -1,61 +0,0 @@
package lambda
import (
"git.maximhutz.com/max/lambda/pkg/emitter"
"git.maximhutz.com/max/lambda/pkg/expr"
"git.maximhutz.com/max/lambda/pkg/reducer"
)
// NormalOrderReducer implements normal order (leftmost-outermost) reduction
// for lambda calculus expressions.
type NormalOrderReducer struct {
emitter.BaseEmitter[reducer.Event]
expression *Expression
}
// NewNormalOrderReducer creates a new normal order reducer.
func NewNormalOrderReducer(expression *Expression) *NormalOrderReducer {
return &NormalOrderReducer{
BaseEmitter: *emitter.New[reducer.Event](),
expression: expression,
}
}
// Expression returns the current expression state.
func (r *NormalOrderReducer) Expression() expr.Expression {
return *r.expression
}
func isViable(e *Expression) (*Abstraction, Expression, bool) {
if e == nil {
return nil, nil, false
} else if app, appOk := (*e).(*Application); !appOk {
return nil, nil, false
} else if fn, fnOk := app.abstraction.(*Abstraction); !fnOk {
return nil, nil, false
} else {
return fn, app.argument, true
}
}
// Reduce performs normal order reduction on a lambda expression.
// The expression must be a lambda.Expression; other types are returned unchanged.
func (r *NormalOrderReducer) Reduce() {
r.Emit(reducer.StartEvent)
it := NewIterator(r.expression)
for !it.Done() {
if fn, arg, ok := isViable(it.Current()); !ok {
it.Next()
} else {
it.Swap(Substitute(fn.body, fn.parameter, arg))
r.Emit(reducer.StepEvent)
if _, _, ok := isViable(it.Parent()); ok {
it.Back()
}
}
}
r.Emit(reducer.StopEvent)
}


@@ -1,38 +1,31 @@
package lambda
func Rename(expr Expression, target string, newName string) Expression {
switch e := expr.(type) {
case *Variable:
if e.value == target {
return NewVariable(newName)
}
return e
import "fmt"
case *Abstraction:
newParam := e.parameter
if e.parameter == target {
// Rename replaces all occurrences of the target variable name with the new name.
func Rename(e Expression, target string, newName string) Expression {
switch e := e.(type) {
case Variable:
if e.Name == target {
return Variable{Name: newName}
}
return e
case Abstraction:
newParam := e.Parameter
if e.Parameter == target {
newParam = newName
}
newBody := Rename(e.body, target, newName)
newBody := Rename(e.Body, target, newName)
if newParam == e.parameter && newBody == e.body {
return e
}
return NewAbstraction(newParam, newBody)
case *Application:
newAbs := Rename(e.abstraction, target, newName)
newArg := Rename(e.argument, target, newName)
if newAbs == e.abstraction && newArg == e.argument {
return e
}
return NewApplication(newAbs, newArg)
return Abstraction{Parameter: newParam, Body: newBody}
case Application:
newAbs := Rename(e.Abstraction, target, newName)
newArg := Rename(e.Argument, target, newName)
return Application{Abstraction: newAbs, Argument: newArg}
default:
return expr
panic(fmt.Errorf("unknown expression type: %v", e))
}
}

18
pkg/lambda/scan.go Normal file

@@ -0,0 +1,18 @@
package lambda
import "git.maximhutz.com/max/lambda/pkg/token"
// scanner is the declarative lexer for the lambda calculus.
var scanner = token.NewScanner(
token.On(`\(`, tokenOpenParen, 0),
token.On(`\)`, tokenCloseParen, 0),
token.On(`\\`, tokenSlash, 0),
token.On(`\.`, tokenDot, 0),
token.On(`[a-zA-Z0-9_]+`, tokenAtom, 0),
token.Skip[tokenType](`\s+`, 0),
)
// scan tokenizes an input string into lambda calculus tokens.
func scan(input string) ([]lambdaToken, error) {
return scanner.Scan(input)
}


@@ -1,32 +1,17 @@
package lambda
import "strings"
type stringifyVisitor struct {
builder strings.Builder
}
func (v *stringifyVisitor) VisitVariable(a *Variable) {
v.builder.WriteString(a.value)
}
func (v *stringifyVisitor) VisitAbstraction(f *Abstraction) {
v.builder.WriteRune('\\')
v.builder.WriteString(f.parameter)
v.builder.WriteRune('.')
f.body.Accept(v)
}
func (v *stringifyVisitor) VisitApplication(c *Application) {
v.builder.WriteRune('(')
c.abstraction.Accept(v)
v.builder.WriteRune(' ')
c.argument.Accept(v)
v.builder.WriteRune(')')
}
import "fmt"
// Stringify renders an expression as a string.
func Stringify(e Expression) string {
b := &stringifyVisitor{builder: strings.Builder{}}
e.Accept(b)
return b.builder.String()
switch e := e.(type) {
case Variable:
return e.Name
case Abstraction:
return "\\" + e.Parameter + "." + Stringify(e.Body)
case Application:
return "(" + Stringify(e.Abstraction) + " " + Stringify(e.Argument) + ")"
default:
panic(fmt.Errorf("unknown expression type: %v", e))
}
}


@@ -1,46 +1,41 @@
package lambda
func Substitute(expr Expression, target string, replacement Expression) Expression {
switch e := expr.(type) {
case *Variable:
if e.value == target {
import "fmt"
// Substitute replaces all free occurrences of the target variable with the
// replacement expression. Alpha-renaming is performed automatically to
// avoid variable capture.
func Substitute(e Expression, target string, replacement Expression) Expression {
switch e := e.(type) {
case Variable:
if e.Name == target {
return replacement
}
return e
case *Abstraction:
if e.parameter == target {
return e
case Abstraction:
if e.Parameter == target {
return e
}
body := e.body
param := e.parameter
if IsFreeVariable(param, replacement) {
freeVars := GetFreeVariables(replacement)
freeVars.Merge(GetFreeVariables(body))
body := e.Body
param := e.Parameter
if IsFree(replacement, param) {
freeVars := GetFree(replacement)
freeVars.Merge(GetFree(body))
freshVar := GenerateFreshName(freeVars)
body = Rename(body, param, freshVar)
param = freshVar
}
newBody := Substitute(body, target, replacement)
if newBody == body && param == e.parameter {
return e
}
return NewAbstraction(param, newBody)
case *Application:
newAbs := Substitute(e.abstraction, target, replacement)
newArg := Substitute(e.argument, target, replacement)
if newAbs == e.abstraction && newArg == e.argument {
return e
}
return NewApplication(newAbs, newArg)
return Abstraction{Parameter: param, Body: newBody}
case Application:
abs := Substitute(e.Abstraction, target, replacement)
arg := Substitute(e.Argument, target, replacement)
return Application{Abstraction: abs, Argument: arg}
default:
return expr
panic(fmt.Errorf("unknown expression type: %v", e))
}
}

45
pkg/lambda/token.go Normal file

@@ -0,0 +1,45 @@
package lambda
import (
"fmt"
"git.maximhutz.com/max/lambda/pkg/token"
)
// A tokenType is an identifier for any token in the lambda calculus.
type tokenType int
// All official tokens of the lambda calculus.
const (
// tokenOpenParen denotes the '(' token.
tokenOpenParen tokenType = iota
// tokenCloseParen denotes the ')' token.
tokenCloseParen
// tokenSlash denotes the '\' token.
tokenSlash
// tokenDot denotes the '.' token.
tokenDot
// tokenAtom denotes an alpha-numeric variable.
tokenAtom
)
// Name returns a human-readable name for the tokenType.
func (t tokenType) Name() string {
switch t {
case tokenOpenParen:
return "("
case tokenCloseParen:
return ")"
case tokenSlash:
return "\\"
case tokenDot:
return "."
case tokenAtom:
return "ATOM"
default:
panic(fmt.Errorf("unknown token type %v", t))
}
}
// lambdaToken is the concrete token type for the lambda calculus.
type lambdaToken = token.Token[tokenType]


@@ -1,13 +0,0 @@
package reducer
// Event represents lifecycle events during reduction.
type Event int
const (
// StartEvent is emitted before reduction begins.
StartEvent Event = iota
// StepEvent is emitted after each reduction step.
StepEvent
// StopEvent is emitted after reduction completes.
StopEvent
)


@@ -1,27 +0,0 @@
// Package reducer provides the abstract Reducer interface for all expression
// reduction strategies.
package reducer
import (
"git.maximhutz.com/max/lambda/pkg/emitter"
"git.maximhutz.com/max/lambda/pkg/expr"
)
// Reducer defines the interface for expression reduction strategies.
// Different evaluation modes (normal order, applicative order, SKI combinators,
// etc.) implement this interface with their own reduction logic.
//
// Reducers also implement the Emitter interface to allow plugins to observe
// reduction lifecycle events (Start, Step, Stop).
type Reducer interface {
emitter.Emitter[Event]
// Reduce performs all reduction steps on the expression.
// Emits StartEvent before reduction, StepEvent after each step, and
// StopEvent after completion.
// Returns the final reduced expression.
Reduce()
// Expression returns the current expression state.
Expression() expr.Expression
}

27
pkg/saccharine/codec.go Normal file

@@ -0,0 +1,27 @@
package saccharine
import (
"git.maximhutz.com/max/lambda/pkg/codec"
)
// A Codec is a [codec.Codec] that serializes Saccharine expressions.
type Codec struct{}
// Decode parses a string as Saccharine source code. Returns an error
// if it cannot.
func (c Codec) Decode(s string) (Expression, error) {
tokens, err := scan(s)
if err != nil {
return nil, err
}
return parse(tokens)
}
// Encode turns a Saccharine expression into a string. Returns an error if it
// cannot.
func (c Codec) Encode(e Expression) (string, error) {
return stringifyExpression(e), nil
}
var _ codec.Codec[Expression] = (*Codec)(nil)


@@ -1,49 +0,0 @@
package saccharine
type Expression interface {
IsExpression()
}
/** ------------------------------------------------------------------------- */
type Abstraction struct {
Parameters []string
Body Expression
}
type Application struct {
Abstraction Expression
Arguments []Expression
}
type Atom struct {
Name string
}
type Clause struct {
Statements []Statement
Returns Expression
}
func (Abstraction) IsExpression() {}
func (Application) IsExpression() {}
func (Atom) IsExpression() {}
func (Clause) IsExpression() {}
/** ------------------------------------------------------------------------- */
func NewAbstraction(parameter []string, body Expression) *Abstraction {
return &Abstraction{Parameters: parameter, Body: body}
}
func NewApplication(abstraction Expression, arguments []Expression) *Application {
return &Application{Abstraction: abstraction, Arguments: arguments}
}
func NewAtom(name string) *Atom {
return &Atom{Name: name}
}
func NewClause(statements []Statement, returns Expression) *Clause {
return &Clause{Statements: statements, Returns: returns}
}


@@ -5,123 +5,89 @@ import (
"fmt"
"git.maximhutz.com/max/lambda/pkg/iterator"
"git.maximhutz.com/max/lambda/pkg/saccharine/token"
"git.maximhutz.com/max/lambda/pkg/trace"
"git.maximhutz.com/max/lambda/pkg/token"
)
type TokenIterator = iterator.Iterator[token.Token]
type tokenIterator = iterator.Iterator[Token]
func parseRawToken(i *TokenIterator, expected token.Type) (*token.Token, error) {
return iterator.Do(i, func(i *TokenIterator) (*token.Token, error) {
if tok, err := i.Next(); err != nil {
return nil, err
} else if tok.Type != expected {
return nil, fmt.Errorf("expected token %v, got %v'", token.Name(expected), tok.Value)
} else {
return &tok, nil
}
})
}
func passSoftBreaks(i *TokenIterator) {
func passSoftBreaks(i *tokenIterator) {
for {
if _, err := parseRawToken(i, token.SoftBreak); err != nil {
if _, err := token.ParseRawToken(i, TokenSoftBreak); err != nil {
return
}
}
}
func parseToken(i *TokenIterator, expected token.Type, ignoreSoftBreaks bool) (*token.Token, error) {
return iterator.Do(i, func(i *TokenIterator) (*token.Token, error) {
func parseToken(i *tokenIterator, expected TokenType, ignoreSoftBreaks bool) (*Token, error) {
if ignoreSoftBreaks {
passSoftBreaks(i)
}
return parseRawToken(i, expected)
})
return token.ParseRawToken(i, expected)
}
func parseString(i *TokenIterator) (string, error) {
if tok, err := parseToken(i, token.Atom, true); err != nil {
return "", trace.Wrap(err, "no variable (col %d)", i.Index())
func parseString(i *tokenIterator) (string, error) {
if tok, err := parseToken(i, TokenAtom, true); err != nil {
return "", fmt.Errorf("no variable (col %d): %w", i.Index(), err)
} else {
return tok.Value, nil
}
}
func parseBreak(i *TokenIterator) (*token.Token, error) {
if tok, softErr := parseRawToken(i, token.SoftBreak); softErr == nil {
func parseBreak(i *tokenIterator) (*Token, error) {
if tok, softErr := token.ParseRawToken(i, TokenSoftBreak); softErr == nil {
return tok, nil
} else if tok, hardErr := parseRawToken(i, token.HardBreak); hardErr == nil {
} else if tok, hardErr := token.ParseRawToken(i, TokenHardBreak); hardErr == nil {
return tok, nil
} else {
return nil, errors.Join(softErr, hardErr)
}
}
func parseList[U any](i *TokenIterator, fn func(*TokenIterator) (U, error), minimum int) ([]U, error) {
results := []U{}
for {
if u, err := fn(i); err != nil {
if len(results) < minimum {
return nil, trace.Wrap(err, "expected at least '%v' items, got only '%v'", minimum, len(results))
}
return results, nil
} else {
results = append(results, u)
}
}
}
func parseAbstraction(i *TokenIterator) (*Abstraction, error) {
return iterator.Do(i, func(i *TokenIterator) (*Abstraction, error) {
if _, err := parseToken(i, token.Slash, true); err != nil {
return nil, trace.Wrap(err, "no function slash (col %d)", i.MustGet().Column)
} else if parameters, err := parseList(i, parseString, 0); err != nil {
func parseAbstraction(i *tokenIterator) (*Abstraction, error) {
if _, err := parseToken(i, TokenSlash, true); err != nil {
return nil, fmt.Errorf("no function slash (col %d): %w", i.MustGet().Column, err)
} else if parameters, err := token.ParseList(i, parseString, 0); err != nil {
return nil, err
} else if _, err = parseToken(i, token.Dot, true); err != nil {
return nil, trace.Wrap(err, "no function dot (col %d)", i.MustGet().Column)
} else if _, err = parseToken(i, TokenDot, true); err != nil {
return nil, fmt.Errorf("no function dot (col %d): %w", i.MustGet().Column, err)
} else if body, err := parseExpression(i); err != nil {
return nil, err
} else {
return NewAbstraction(parameters, body), nil
return &Abstraction{Parameters: parameters, Body: body}, nil
}
})
}
func parseApplication(i *TokenIterator) (*Application, error) {
return iterator.Do(i, func(i *TokenIterator) (*Application, error) {
if _, err := parseToken(i, token.OpenParen, true); err != nil {
return nil, trace.Wrap(err, "no openning brackets (col %d)", i.MustGet().Column)
} else if expressions, err := parseList(i, parseExpression, 1); err != nil {
func parseApplication(i *tokenIterator) (*Application, error) {
if _, err := parseToken(i, TokenOpenParen, true); err != nil {
return nil, fmt.Errorf("no opening brackets (col %d): %w", i.MustGet().Column, err)
} else if expressions, err := token.ParseList(i, parseExpression, 1); err != nil {
return nil, err
} else if _, err := parseToken(i, token.CloseParen, true); err != nil {
return nil, trace.Wrap(err, "no closing brackets (col %d)", i.MustGet().Column)
} else if _, err := parseToken(i, TokenCloseParen, true); err != nil {
return nil, fmt.Errorf("no closing brackets (col %d): %w", i.MustGet().Column, err)
} else {
return NewApplication(expressions[0], expressions[1:]), nil
}
})
}
func parseAtom(i *TokenIterator) (*Atom, error) {
if tok, err := parseToken(i, token.Atom, true); err != nil {
return nil, trace.Wrap(err, "no variable (col %d)", i.Index())
} else {
return NewAtom(tok.Value), nil
return &Application{Abstraction: expressions[0], Arguments: expressions[1:]}, nil
}
}
func parseStatements(i *TokenIterator) ([]Statement, error) {
func parseAtom(i *tokenIterator) (*Atom, error) {
if tok, err := parseToken(i, TokenAtom, true); err != nil {
return nil, fmt.Errorf("no variable (col %d): %w", i.Index(), err)
} else {
return &Atom{Name: tok.Value}, nil
}
}
func parseStatements(i *tokenIterator) ([]Statement, error) {
statements := []Statement{}
//nolint:errcheck
parseList(i, parseBreak, 0)
token.ParseList(i, parseBreak, 0)
for {
if statement, err := parseStatement(i); err != nil {
break
} else if _, err := parseList(i, parseBreak, 1); err != nil && !i.Done() {
} else if _, err := token.ParseList(i, parseBreak, 1); err != nil && !i.Done() {
break
} else {
statements = append(statements, statement)
@@ -131,9 +97,9 @@ func parseStatements(i *TokenIterator) ([]Statement, error) {
return statements, nil
}
func parseClause(i *TokenIterator, braces bool) (*Clause, error) {
func parseClause(i *tokenIterator, braces bool) (*Clause, error) {
if braces {
if _, err := parseToken(i, token.OpenBrace, true); err != nil {
if _, err := parseToken(i, TokenOpenBrace, true); err != nil {
return nil, err
}
}
@@ -152,59 +118,59 @@ func parseClause(i *TokenIterator, braces bool) (*Clause, error) {
}
if braces {
if _, err := parseToken(i, token.CloseBrace, true); err != nil {
if _, err := parseToken(i, TokenCloseBrace, true); err != nil {
return nil, err
}
}
return NewClause(stmts[:len(stmts)-1], last.Value), nil
return &Clause{Statements: stmts[:len(stmts)-1], Returns: last.Value}, nil
}
func parseExpression(i *TokenIterator) (Expression, error) {
return iterator.Do(i, func(i *TokenIterator) (Expression, error) {
func parseExpression(i *tokenIterator) (Expression, error) {
passSoftBreaks(i)
if i.Done() {
return nil, fmt.Errorf("unexpected end of input")
}
switch peek := i.MustGet(); peek.Type {
case token.OpenParen:
case TokenOpenParen:
return parseApplication(i)
case token.Slash:
case TokenSlash:
return parseAbstraction(i)
case token.Atom:
case TokenAtom:
return parseAtom(i)
case token.OpenBrace:
case TokenOpenBrace:
return parseClause(i, true)
default:
return nil, fmt.Errorf("expected expression, got '%v' (col %d)", peek.Value, peek.Column)
}
})
}
func parseLet(i *TokenIterator) (*LetStatement, error) {
return iterator.Do(i, func(i *TokenIterator) (*LetStatement, error) {
if parameters, err := parseList(i, parseString, 1); err != nil {
func parseLet(i *tokenIterator) (*LetStatement, error) {
if parameters, err := token.ParseList(i, parseString, 1); err != nil {
return nil, err
} else if _, err := parseToken(i, token.Assign, true); err != nil {
} else if _, err := parseToken(i, TokenAssign, true); err != nil {
return nil, err
} else if body, err := parseExpression(i); err != nil {
return nil, err
} else {
return NewLet(parameters[0], parameters[1:], body), nil
return &LetStatement{Name: parameters[0], Parameters: parameters[1:], Body: body}, nil
}
})
}
func parseDeclare(i *TokenIterator) (*DeclareStatement, error) {
func parseDeclare(i *tokenIterator) (*DeclareStatement, error) {
if value, err := parseExpression(i); err != nil {
return nil, err
} else {
return NewDeclare(value), nil
return &DeclareStatement{Value: value}, nil
}
}
func parseStatement(i *TokenIterator) (Statement, error) {
if let, letErr := parseLet(i); letErr == nil {
func parseStatement(i *tokenIterator) (Statement, error) {
if let, letErr := iterator.Try(i, parseLet); letErr == nil {
return let, nil
} else if declare, declErr := parseDeclare(i); declErr == nil {
} else if declare, declErr := iterator.Try(i, parseDeclare); declErr == nil {
return declare, nil
} else {
return nil, errors.Join(letErr, declErr)
@@ -212,7 +178,7 @@ func parseStatement(i *TokenIterator) (Statement, error) {
}
// Given a list of tokens, attempt to parse it into a syntax tree.
func parse(tokens []token.Token) (Expression, error) {
func parse(tokens []Token) (Expression, error) {
i := iterator.Of(tokens)
exp, err := parseClause(i, false)


@@ -1,22 +1,60 @@
// Package "saccharine" provides a simple language built on top of λ-calculus,
// to facilitate productive coding using it.
// Package saccharine defines the AST for the Saccharine language, a sugared
// lambda calculus with let bindings and multi-statement clauses.
package saccharine
import (
"git.maximhutz.com/max/lambda/pkg/saccharine/token"
)
// Convert a piece of valid saccharine code into an expression.
func Parse(code string) (Expression, error) {
tokens, err := token.Parse(code)
if err != nil {
return nil, err
}
return parse(tokens)
// An Expression is a node in the Saccharine abstract syntax tree.
// It is a sealed interface; only types in this package may implement it.
type Expression interface {
expression()
}
// Convert a parsed saccharine expression back into source code.
func Stringify(expression Expression) string {
return stringifyExpression(expression)
// An Abstraction is a lambda expression with zero or more parameters.
// A zero-parameter abstraction is treated as a thunk.
type Abstraction struct {
Parameters []string
Body Expression
}
// An Application applies an expression to zero or more arguments.
type Application struct {
Abstraction Expression
Arguments []Expression
}
// An Atom is a named variable.
type Atom struct {
Name string
}
// A Clause is a sequence of statements followed by a return expression.
type Clause struct {
Statements []Statement
Returns Expression
}
func (Abstraction) expression() {}
func (Application) expression() {}
func (Atom) expression() {}
func (Clause) expression() {}
// A Statement is a declaration within a Clause.
// It is a sealed interface; only types in this package may implement it.
type Statement interface {
statement()
}
// A LetStatement binds a name (with optional parameters) to an expression.
type LetStatement struct {
Name string
Parameters []string
Body Expression
}
// A DeclareStatement evaluates an expression for its side effects within a
// clause.
type DeclareStatement struct {
Value Expression
}
func (LetStatement) statement() {}
func (DeclareStatement) statement() {}

24
pkg/saccharine/scan.go Normal file

@@ -0,0 +1,24 @@
package saccharine
import "git.maximhutz.com/max/lambda/pkg/token"
// scanner is the declarative lexer for the Saccharine language.
var scanner = token.NewScanner(
token.On(`:=`, TokenAssign, 1),
token.On(`\(`, TokenOpenParen, 0),
token.On(`\)`, TokenCloseParen, 0),
token.On(`\{`, TokenOpenBrace, 0),
token.On(`\}`, TokenCloseBrace, 0),
token.On(`;`, TokenHardBreak, 0),
token.On(`\n`, TokenSoftBreak, 0),
token.On(`\\`, TokenSlash, 0),
token.On(`\.`, TokenDot, 0),
token.On(`[a-zA-Z0-9_]+`, TokenAtom, 0),
token.Skip[TokenType](`#[^\n]*`, 0),
token.Skip[TokenType](`[^\S\n]+`, 0),
)
// scan tokenizes a string into Saccharine tokens.
func scan(input string) ([]Token, error) {
return scanner.Scan(input)
}


@@ -1,30 +0,0 @@
package saccharine
type Statement interface {
IsStatement()
}
/** ------------------------------------------------------------------------- */
type LetStatement struct {
Name string
Parameters []string
Body Expression
}
type DeclareStatement struct {
Value Expression
}
func (LetStatement) IsStatement() {}
func (DeclareStatement) IsStatement() {}
/** ------------------------------------------------------------------------- */
func NewLet(name string, parameters []string, body Expression) *LetStatement {
return &LetStatement{Name: name, Parameters: parameters, Body: body}
}
func NewDeclare(value Expression) *DeclareStatement {
return &DeclareStatement{Value: value}
}

65
pkg/saccharine/token.go Normal file

@@ -0,0 +1,65 @@
package saccharine
import (
"fmt"
"git.maximhutz.com/max/lambda/pkg/token"
)
// A TokenType is an identifier for any token in the Saccharine language.
type TokenType int
// All official tokens of the Saccharine language.
const (
// TokenOpenParen denotes the '(' token.
TokenOpenParen TokenType = iota
// TokenCloseParen denotes the ')' token.
TokenCloseParen
// TokenOpenBrace denotes the '{' token.
TokenOpenBrace
// TokenCloseBrace denotes the '}' token.
TokenCloseBrace
// TokenHardBreak denotes the ';' token.
TokenHardBreak
// TokenAssign denotes the ':=' token.
TokenAssign
// TokenAtom denotes an alpha-numeric variable.
TokenAtom
// TokenSlash denotes the '\\' token.
TokenSlash
// TokenDot denotes the '.' token.
TokenDot
// TokenSoftBreak denotes a new-line.
TokenSoftBreak
)
// Name returns a human-readable name for the TokenType.
func (t TokenType) Name() string {
switch t {
case TokenOpenParen:
return "("
case TokenCloseParen:
return ")"
case TokenOpenBrace:
return "{"
case TokenCloseBrace:
return "}"
case TokenHardBreak:
return ";"
case TokenAssign:
return ":="
case TokenAtom:
return "ATOM"
case TokenSlash:
return "\\"
case TokenDot:
return "."
case TokenSoftBreak:
return "\\n"
default:
panic(fmt.Errorf("unknown token type %v", t))
}
}
// Token is the concrete token type for the Saccharine language.
type Token = token.Token[TokenType]


@@ -1,130 +0,0 @@
package token
import (
"errors"
"fmt"
"unicode"
"git.maximhutz.com/max/lambda/pkg/iterator"
"git.maximhutz.com/max/lambda/pkg/trace"
)
// isVariables determines whether a rune can be a valid variable.
func isVariable(r rune) bool {
return unicode.IsLetter(r) || unicode.IsNumber(r)
}
func parseRune(i *iterator.Iterator[rune], expected func(rune) bool) (rune, error) {
i2 := i.Copy()
if r, err := i2.Next(); err != nil {
return r, err
} else if !expected(r) {
return r, fmt.Errorf("got unexpected rune %v'", r)
} else {
i.Sync(i2)
return r, nil
}
}
func parseCharacter(i *iterator.Iterator[rune], expected rune) (rune, error) {
i2 := i.Copy()
if r, err := i2.Next(); err != nil {
return r, err
} else if r != expected {
return r, fmt.Errorf("got unexpected rune %v'", r)
} else {
i.Sync(i2)
return r, nil
}
}
// Pulls the next token from an iterator over runes. If it cannot, it will
// return nil. If an error occurs, it will return that.
func getToken(i *iterator.Iterator[rune]) (*Token, error) {
index := i.Index()
if i.Done() {
return nil, nil
}
letter, err := i.Next()
if err != nil {
return nil, trace.Wrap(err, "cannot produce next token")
}
switch {
case letter == '(':
return NewOpenParen(index), nil
case letter == ')':
return NewCloseParen(index), nil
case letter == '.':
return NewDot(index), nil
case letter == '\\':
return NewSlash(index), nil
case letter == '\n':
return NewSoftBreak(index), nil
case letter == '{':
return NewOpenBrace(index), nil
case letter == '}':
return NewCloseBrace(index), nil
case letter == ':':
if _, err := parseCharacter(i, '='); err != nil {
return nil, err
} else {
return NewAssign(index), nil
}
case letter == ';':
return NewHardBreak(index), nil
case letter == '#':
// Skip everything until the next newline or EOF.
for !i.Done() {
r, err := i.Next()
if err != nil {
return nil, trace.Wrap(err, "error while parsing comment")
}
if r == '\n' {
// Put the newline back so it can be processed as a soft break.
i.Back()
break
}
}
return nil, nil
case unicode.IsSpace(letter):
return nil, nil
case isVariable(letter):
atom := []rune{letter}
for {
if r, err := parseRune(i, isVariable); err != nil {
break
} else {
atom = append(atom, r)
}
}
return NewAtom(string(atom), index), nil
}
return nil, fmt.Errorf("unknown character '%v'", string(letter))
}
// Parse a string into tokens.
func Parse(input string) ([]Token, error) {
i := iterator.Of([]rune(input))
tokens := []Token{}
errorList := []error{}
for !i.Done() {
token, err := getToken(i)
if err != nil {
errorList = append(errorList, err)
} else if token != nil {
tokens = append(tokens, *token)
}
}
return tokens, errors.Join(errorList...)
}


@@ -1,91 +0,0 @@
package token
import "fmt"
// All tokens in the pseudo-lambda language.
type Type int
const (
OpenParen Type = iota // Denotes the '(' token.
CloseParen // Denotes the ')' token.
OpenBrace // Denotes the '{' token.
CloseBrace // Denotes the '}' token.
HardBreak // Denotes the ';' token.
Assign // Denotes the ':=' token.
Atom // Denotes an alpha-numeric variable.
Slash // Denotes the '/' token.
Dot // Denotes the '.' token.
SoftBreak // Denotes a new-line.
)
// A representation of a token in source code.
type Token struct {
Column int // Where the token begins in the source text.
Type Type // What type the token is.
Value string // The value of the token.
}
func NewOpenParen(column int) *Token {
return &Token{Type: OpenParen, Column: column, Value: "("}
}
func NewCloseParen(column int) *Token {
return &Token{Type: CloseParen, Column: column, Value: ")"}
}
func NewOpenBrace(column int) *Token {
return &Token{Type: OpenBrace, Column: column, Value: "{"}
}
func NewCloseBrace(column int) *Token {
return &Token{Type: CloseBrace, Column: column, Value: "}"}
}
func NewDot(column int) *Token {
return &Token{Type: Dot, Column: column, Value: "."}
}
func NewHardBreak(column int) *Token {
return &Token{Type: HardBreak, Column: column, Value: ";"}
}
func NewAssign(column int) *Token {
return &Token{Type: Assign, Column: column, Value: ":="}
}
func NewSlash(column int) *Token {
return &Token{Type: Slash, Column: column, Value: "\\"}
}
func NewAtom(name string, column int) *Token {
return &Token{Type: Atom, Column: column, Value: name}
}
func NewSoftBreak(column int) *Token {
return &Token{Type: SoftBreak, Column: column, Value: "\\n"}
}
func Name(typ Type) string {
switch typ {
case OpenParen:
return "("
case CloseParen:
return ")"
case Slash:
return "\\"
case Dot:
return "."
case Atom:
return "ATOM"
case SoftBreak:
return "\\n"
case HardBreak:
return ";"
default:
panic(fmt.Errorf("unknown token type %v", typ))
}
}
func (t Token) Name() string {
return Name(t.Type)
}


@@ -1,31 +1,41 @@
// Package set defines a generic, mutable unordered set data structure.
package set
import "iter"
// A Set is an implementation of a mutable, unordered set. It uses a Go map
// as its underlying data structure.
type Set[T comparable] map[T]bool
func (s *Set[T]) Add(items ...T) {
// Add inserts the given items into the set.
func (s Set[T]) Add(items ...T) {
for _, item := range items {
(*s)[item] = true
s[item] = true
}
}
// Has returns true if an item is present in the set.
func (s Set[T]) Has(item T) bool {
return s[item]
}
func (s *Set[T]) Remove(items ...T) {
// Remove deletes a list of items from the set.
func (s Set[T]) Remove(items ...T) {
for _, item := range items {
delete(*s, item)
delete(s, item)
}
}
func (s *Set[T]) Merge(o *Set[T]) {
for item := range *o {
// Merge adds all items in the argument into the set. The argument is not
// mutated.
func (s Set[T]) Merge(o Set[T]) {
for item := range o {
s.Add(item)
}
}
// ToList returns all items present in the set, as a slice. The order of the
// items is not guaranteed.
func (s Set[T]) ToList() []T {
list := []T{}
@@ -36,6 +46,8 @@ func (s Set[T]) ToList() []T {
return list
}
// Items returns a sequence of all items present in the set. The order of the
// items is not guaranteed.
func (s Set[T]) Items() iter.Seq[T] {
return func(yield func(T) bool) {
for item := range s {
@@ -46,8 +58,9 @@ func (s Set[T]) Items() iter.Seq[T] {
}
}
func New[T comparable](items ...T) *Set[T] {
result := &Set[T]{}
// New creates a set containing the given items.
func New[T comparable](items ...T) Set[T] {
result := Set[T]{}
for _, item := range items {
result.Add(item)

pkg/token/parse.go Normal file

@@ -0,0 +1,42 @@
package token
import (
"fmt"
"git.maximhutz.com/max/lambda/pkg/iterator"
)
// ParseRawToken consumes the next token from the iterator if its type matches
// the expected type.
// Returns an error if the iterator is exhausted or the token type does not
// match.
func ParseRawToken[T Type](i *iterator.Iterator[Token[T]], expected T) (*Token[T], error) {
tok, err := i.Get()
if err != nil {
return nil, err
}
if tok.Type != expected {
return nil, fmt.Errorf("expected token '%v', got '%v'", expected.Name(), tok.Value)
}
i.Forward()
return &tok, nil
}
// ParseList repeatedly applies a parse function, collecting results into a
// slice.
// Stops when the parse function returns an error.
// Returns an error if fewer than minimum results are collected.
func ParseList[T Type, U any](i *iterator.Iterator[Token[T]], fn func(*iterator.Iterator[Token[T]]) (U, error), minimum int) ([]U, error) {
results := []U{}
for {
if u, err := fn(i); err != nil {
if len(results) < minimum {
return nil, fmt.Errorf("expected at least '%v' items, got only '%v': %w", minimum, len(results), err)
}
return results, nil
} else {
results = append(results, u)
}
}
}

pkg/token/scanner.go Normal file

@@ -0,0 +1,129 @@
package token
import (
"errors"
"fmt"
"regexp"
"slices"
)
// A rule describes a single lexical pattern for the scanner.
type rule[T Type] struct {
pattern *regexp.Regexp
typ T
precedence int
skip bool
}
// compare orders rules by descending precedence.
func (r rule[T]) compare(other rule[T]) int {
return other.precedence - r.precedence
}
// An Option configures a Scanner during construction.
type Option[T Type] func(rules []rule[T]) []rule[T]
// On returns an option that registers a token-emitting rule.
// The token's value is the matched text.
// Higher precedence rules are tried first.
func On[T Type](pattern string, typ T, precedence int) Option[T] {
return func(rules []rule[T]) []rule[T] {
return append(rules, rule[T]{
pattern: compileAnchored(pattern),
typ: typ,
precedence: precedence,
})
}
}
// Skip returns an option that registers a non-emitting rule.
// This is used for whitespace and comments.
// Higher precedence rules are tried first.
func Skip[T Type](pattern string, precedence int) Option[T] {
return func(rules []rule[T]) []rule[T] {
return append(rules, rule[T]{
pattern: compileAnchored(pattern),
precedence: precedence,
skip: true,
})
}
}
// A Scanner is a declarative lexer built from a set of regex rules.
// Rules are sorted by precedence (highest first), with registration order as
// tiebreaker.
// At each position, the first matching rule wins.
type Scanner[T Type] struct {
rules []rule[T]
}
// NewScanner creates a Scanner by applying the given options and sorting the
// resulting rules by precedence.
func NewScanner[T Type](opts ...Option[T]) *Scanner[T] {
var rules []rule[T]
for _, opt := range opts {
rules = opt(rules)
}
slices.SortStableFunc(rules, rule[T].compare)
return &Scanner[T]{rules: rules}
}
// scanOne tries each rule at the current position and returns the first match.
// Returns the token (or nil if skipped) and the number of bytes consumed.
// Returns 0 if no rule matched.
func (s *Scanner[T]) scanOne(input string, pos int) (*Token[T], int) {
for _, r := range s.rules {
loc := r.pattern.FindStringIndex(input[pos:])
if loc == nil || loc[1] == 0 {
continue
}
if r.skip {
return nil, loc[1]
}
return &Token[T]{
Type: r.typ,
Value: input[pos : pos+loc[1]],
Column: pos,
}, loc[1]
}
return nil, 0
}
// Scan tokenizes the input string using the registered rules.
// At each position, rules are tried in precedence order and the first match
// wins.
// If no rule matches, an error is recorded and the scanner advances one byte.
func (s *Scanner[T]) Scan(input string) ([]Token[T], error) {
tokens := []Token[T]{}
errorList := []error{}
for pos := 0; pos < len(input); {
tok, n := s.scanOne(input, pos)
if n == 0 {
errorList = append(errorList, fmt.Errorf("unknown character '%v'", string(input[pos])))
pos++
continue
}
if tok != nil {
tokens = append(tokens, *tok)
}
pos += n
}
return tokens, errors.Join(errorList...)
}
// compileAnchored compiles a regex pattern, prepending \A so it only matches
// at the current scan position.
// Patterns must not be pre-anchored.
func compileAnchored(pattern string) *regexp.Regexp {
return regexp.MustCompile(`\A(?:` + pattern + `)`)
}

pkg/token/token.go Normal file

@@ -0,0 +1,24 @@
// Package token provides generic token types and scanning/parsing primitives
// for building language-specific lexers and parsers.
package token
// A Type is a constraint for language-specific token type enums.
// It must be comparable (for equality checks) and must have a Name method
// that returns a human-readable string for error messages.
type Type interface {
comparable
// Name returns a human-readable name for this token type.
Name() string
}
// A Token is a lexical unit in a source language.
type Token[T Type] struct {
Column int // Where the token begins in the source text.
Type T // What type the token is.
Value string // The value of the token.
}
// Name returns the type of the Token, as a string.
func (t Token[T]) Name() string {
return t.Type.Name()
}


@@ -1,25 +0,0 @@
package trace
import (
"errors"
"fmt"
"strings"
)
func Indent(s string, size int) string {
lines := strings.Lines(s)
indent := strings.Repeat(" ", size)
indented := ""
for line := range lines {
indented += indent + line
}
return indented
}
func Wrap(child error, format string, a ...any) error {
parent := fmt.Errorf(format, a...)
childErrString := Indent(child.Error(), 4)
return errors.New(parent.Error() + "\n" + childErrString)
}

File diff suppressed because one or more lines are too long


@@ -1,8 +0,0 @@
0 := \f.\x.x
inc n := \f x.(f (n f x))
exp n m := (m n)
print n := (n F X)
N := (inc (inc (inc (inc (inc (inc 0))))))
(print (exp N N))