## Description

Both the `saccharine` and `lambda` packages need tokenizing and parsing primitives. This PR extracts shared token infrastructure into a new `pkg/token` package, then wires both languages up to use it.

- Add `pkg/token` with a generic `Token[T]` type, `Scan`, `ScanAtom`, `ScanRune`, `ScanCharacter`, `IsVariable`, `ParseRawToken`, and `ParseList`.
- Refactor `pkg/saccharine` to delegate to `pkg/token`, removing duplicated scanning and parsing helpers.
- Implement `Codec.Decode` for `pkg/lambda` (scanner + parser) using the shared token package.
- Add `iterator.While` for predicate-driven iteration.
- Rename `iterator.Do` to `iterator.Try` to better describe its rollback semantics.

### Decisions

- The `Type` constraint (`comparable` + `Name() string`) keeps the generic token flexible while ensuring every token type can produce readable error messages.
- `iterator.Do` was renamed to `iterator.Try` since it describes a try/rollback operation, not a side-effecting "do".

## Benefits

- Eliminates duplicated token, scanning, and parsing code between languages.
- Enables the `lambda` package to decode (parse) lambda calculus strings, which was previously unimplemented.
- Makes it straightforward to add new languages by reusing `pkg/token` primitives.

## Checklist

- [x] Code follows conventional commit format.
- [x] Branch follows naming convention (`<type>/<description>`). Always use underscores.
- [x] Tests pass (if applicable).
- [ ] Documentation updated (if applicable).

Reviewed-on: #46
Co-authored-by: M.V. Hutz <git@maximhutz.me>
Co-committed-by: M.V. Hutz <git@maximhutz.me>
65 lines
1.6 KiB
Go
```go
package saccharine

import (
	"fmt"
	"unicode"

	"git.maximhutz.com/max/lambda/pkg/iterator"
	"git.maximhutz.com/max/lambda/pkg/token"
)

// scanToken pulls the next token from an iterator over runes. If it cannot,
// it returns nil. If an error occurs, it returns that error.
func scanToken(i *iterator.Iterator[rune]) (*Token, error) {
	index := i.Index()

	if i.Done() {
		return nil, nil
	}

	letter, err := i.Next()
	if err != nil {
		return nil, fmt.Errorf("cannot produce next token: %w", err)
	}

	switch {
	case letter == '(':
		return token.New(TokenOpenParen, index), nil
	case letter == ')':
		return token.New(TokenCloseParen, index), nil
	case letter == '.':
		return token.New(TokenDot, index), nil
	case letter == '\\':
		return token.New(TokenSlash, index), nil
	case letter == '\n':
		return token.New(TokenSoftBreak, index), nil
	case letter == '{':
		return token.New(TokenOpenBrace, index), nil
	case letter == '}':
		return token.New(TokenCloseBrace, index), nil
	case letter == ':':
		if _, err := token.ScanCharacter(i, '='); err != nil {
			return nil, err
		}
		return token.New(TokenAssign, index), nil
	case letter == ';':
		return token.New(TokenHardBreak, index), nil
	case letter == '#':
		// Skip everything until the next newline or EOF.
		i.While(func(r rune) bool { return r != '\n' })
		return nil, nil
	case unicode.IsSpace(letter):
		return nil, nil
	case token.IsVariable(letter):
		return token.ScanAtom(i, letter, TokenAtom, index), nil
	}

	return nil, fmt.Errorf("unknown character '%v'", string(letter))
}

// scan converts a string into a slice of tokens.
func scan(input string) ([]Token, error) {
	return token.Scan(input, scanToken)
}
```