feat: add benchmark target to Makefile #14

Merged
mvhutz merged 2 commits from feat/benchmark-target into main 2026-01-11 22:48:27 +00:00
Owner

Description

This PR adds benchmarking capabilities to the lambda interpreter.
The benchmarks measure performance across all sample files in the samples folder.
This enables consistent performance testing and helps track optimization improvements over time.

Changes in this PR:

  • Added new bench target to Makefile for running Go benchmarks.
  • Created benchmark_test.go with sub-benchmarks for each sample file (Church, Fast, Saccharine, Simple, Thunk).
  • Used b.Run for organizing sample-specific sub-benchmarks and b.Loop for efficient iteration.
  • Configured benchmarks to use fixed iterations (10x) and 4 CPU cores for reproducible results; a sketch of the target follows this list.
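
A minimal sketch of what the bench target could look like, assuming the benchmarks sit at the repository root; the exact package path and recipe in the actual Makefile may differ:

```make
# Run all Go benchmarks with a fixed iteration count and CPU count
# so results are comparable across runs.
bench:
	go test -bench=. -benchtime=10x -cpu=4 ./...
```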

Decisions

Used b.Loop() instead of the traditional for i := 0; i < b.N; i++ pattern.
b.Loop is the benchmarking idiom introduced in Go 1.24: only the loop body is timed, so setup before the loop is excluded, and the results of each iteration are kept alive so the compiler cannot optimize the measured work away.

Benchmarks run the full pipeline (parse, compile, execute, stringify) to measure end-to-end performance for each sample.
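
A minimal sketch of how benchmark_test.go could be structured under these decisions. The runPipeline helper is hypothetical, standing in for the interpreter's real parse/compile/execute/stringify entry points, and the package name and sample file layout are assumptions:

```go
package main

import (
	"os"
	"path/filepath"
	"testing"
)

// runPipeline is a hypothetical placeholder for the interpreter's real
// parse → compile → execute → stringify pipeline.
func runPipeline(src string) (string, error) {
	return src, nil // replace with calls into the actual interpreter
}

// BenchmarkSamples creates one sub-benchmark per file in the samples folder.
func BenchmarkSamples(b *testing.B) {
	entries, err := os.ReadDir("samples")
	if err != nil {
		b.Fatal(err)
	}
	for _, entry := range entries {
		if entry.IsDir() {
			continue
		}
		b.Run(entry.Name(), func(b *testing.B) {
			src, err := os.ReadFile(filepath.Join("samples", entry.Name()))
			if err != nil {
				b.Fatal(err)
			}
			// b.Loop (Go 1.24+) times only the loop body, so the file
			// read above is excluded from the measurement.
			for b.Loop() {
				if _, err := runPipeline(string(src)); err != nil {
					b.Fatal(err)
				}
			}
		})
	}
}
```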

Benefits

Provides quantitative performance metrics for the lambda interpreter.
Enables tracking performance improvements or regressions across different sample complexities.
Consistent benchmark configuration (fixed iterations, CPU cores) ensures reproducible results for comparison.

Checklist

  • Code follows conventional commit format.
  • Branch follows naming convention (<type>/<description>). Always use underscores.
  • Tests pass (if applicable).
  • Documentation updated (if applicable).
mvhutz added 1 commit 2026-01-11 22:46:52 +00:00
Add bench target to run Go benchmarks for all samples in the samples folder.
Uses fixed iterations (10x) and 4 CPU cores for consistent results.
mvhutz added 1 commit 2026-01-11 22:48:10 +00:00
Replace individual benchmark functions with a single BenchmarkSamples function.
Use b.Run to create sub-benchmarks for each sample file.
Update all benchmarks to use b.Loop instead of manual b.N iteration.
mvhutz merged commit 609fe05250 into main 2026-01-11 22:48:27 +00:00
mvhutz deleted branch feat/benchmark-target 2026-01-11 22:48:27 +00:00