# Plotterfun Extended

Plotterfun Extended turns images into SVG line art designed for pen plotters.
It is a browser-first React/Vite evolution of the original Plotterfun by mitxela, with a modern UI, many additional algorithms, color-separation workflows, depth-aware rendering, and face-boundary tools.
Play with it live: https://makssurguy.com/labs/plotterfun-extended/
## Contents

- Project Origin
- What This Project Does
- Key Features
- Tech Stack
- Architecture
- Algorithms Included
- Getting Started
- Usage Workflow
- How To Add A New Algorithm
- Extension Points
- Known Limitations
- Future Improvements
- Contributing
- License (MIT)
- Acknowledgments
## Project Origin

- Upstream inspiration and core idea: Plotterfun by mitxela.
- This repository is an extended implementation (`@msurguy/plotterfun-extended`) with:
  - React-based interface
  - Vite + ES module workers
  - Expanded algorithm set (40 workers)
  - Color channel workflows (mono/CMYK/RGB/custom/plotterfun-color)
  - In-browser depth estimation and face-boundary support
  - Output sizing in physical units for plotting workflows
## What This Project Does

- Load an image or snapshot a frame from the webcam.
- Convert it into one or more vector path layers using a selected algorithm.
- Preview and tune the output through algorithm-specific controls.
- Export clean SVG for plotting tools (Inkscape, vpype, AxiDraw workflows, etc.).

Everything runs in the browser; there is no application backend in this repo.
## Key Features

- 40 plotter-focused algorithms as independent Web Workers
- Layered color output: `mono`, `cmyk`, `rgb`, `custom`, and `plotterfun-color` modes (CMYK/RGB/custom variants)
- Dynamic algorithm controls generated by each worker
- Optional face-boundary masking and face crop clipping
- Optional depth-aware rendering:
  - Uses `onnx-community/depth-anything-v2-small`
  - First use downloads model assets (about 40 MB)
- Output sizing in `in` or `mm`, with optional auto-sync from pixel size and DPI
- Pen width and mono pen-color controls applied directly to SVG strokes
- Path-length splitting for long exports (safer for downstream plotter pipelines)
- Light/Dark/System theming with local theme persistence
## Tech Stack

- Frontend: React 18, Zustand
- Build: Vite 5 (`"type": "module"`)
- Workers: ES module Web Workers (one file per algorithm)
- Geometry/math: d3-delaunay, pvectorjs, rbush, rbush-knn
- Image utilities: stackblur-canvas
- ML/vision:
  - `@huggingface/transformers` for depth estimation
  - `@mediapipe/tasks-vision` for face landmarks
- UI icons: lucide-react
## Architecture

Data flow:

```
Input (image/webcam)
  -> preview canvas
  -> processing canvas (ImageData)
  -> algorithm worker receives [config, imageData]
  -> worker returns ['svg-path', pathData]
  -> controller writes <path> nodes into SVG groups
  -> export serialized SVG file
```
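The `'svg-path'` payload ultimately becomes the `d` attribute of a `<path>` node. As a rough sketch of that assembly (the segment shape and the `linesToPathD` helper are assumptions for illustration; the real controller may differ):

```javascript
// Build an SVG path "d" string from line segments [[x1, y1, x2, y2], ...].
// Hypothetical helper; the app's actual pathData format may differ.
function linesToPathD(segments) {
  return segments
    .map(([x1, y1, x2, y2]) => `M${x1} ${y1}L${x2} ${y2}`)
    .join('');
}

const d = linesToPathD([[0, 0, 10, 10], [10, 10, 20, 0]]);
// The controller would then write something like:
// <path d="..." fill="none" stroke="black"/>
```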
Key files:

- `src/App.jsx` - Main UI shell, tabs, color controls, algorithm selector, download action.
- `src/usePlotterController.js` - Runtime orchestrator:
  - Worker lifecycle
  - Processing/reprocessing logic
  - Palette/channel sequencing
  - Face/depth caching
  - SVG updates and export
- `src/store.js` - Global app state (Zustand), including algorithm config and output settings.
- `src/helpers.js` - Worker-side utilities:
  - `defaultControls`
  - Pixel/depth processing
  - Face-boundary sync
  - Output helpers (`postLines`, `postCircles`, `postCurves`)
- `src/workers/*.js` - Individual algorithms; each worker defines its own controls and path generation.
- `src/palette.js` - Palette resolution and channel-image generation logic.
- `src/face.js` - MediaPipe face landmark loading and face oval polygon extraction.
- `src/lib/depth.js` - Transformers.js depth model loading and normalization.
Workers communicate via tuple messages:

- Worker -> main:
  - `['sliders', controls]`
  - `['msg', statusText]`
  - `['svg-path', pathData]`
  - `['dbg', any]` (optional debug)
- Main -> worker:
  - Initial: `[config, imageData]`
  - Live update for `noRestart` controls: `[partialConfig]`
When color output is enabled, the controller builds grayscale channel ImageData objects per palette channel and runs the worker once per channel. Paths are grouped in SVG by channel id/color.
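Conceptually, building a per-channel grayscale image means projecting each RGBA pixel onto one channel. A simplified sketch for an RGB split (the real logic in `src/palette.js` is richer and also covers CMYK and custom palettes):

```javascript
// Extract one RGB channel from RGBA pixel data into a grayscale RGBA buffer.
// `data` is a flat Uint8ClampedArray as in ImageData; channel: 0=R, 1=G, 2=B.
// Simplified illustration, not the app's actual channel builder.
function buildChannelImage(data, channel) {
  const out = new Uint8ClampedArray(data.length);
  for (let i = 0; i < data.length; i += 4) {
    // Invert so strong channel presence reads as "dark" (more ink).
    const v = 255 - data[i + channel];
    out[i] = out[i + 1] = out[i + 2] = v;
    out[i + 3] = 255; // opaque alpha
  }
  return out;
}
```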
Directory layout:

```
src/
  App.jsx
  main.jsx
  store.js
  usePlotterController.js
  helpers.js
  palette.js
  constants.js
  utils.js
  face.js
  webcam.js
  components/
    AlgoControls.jsx
    OutputSizeControls.jsx
    CustomPaletteList.jsx
  lib/
    depth.js
    clipper-wrapper.js
    hatcher.js
  workers/
    *.js (40 algorithms)
docs/
  *.md (reference notes for vision/depth/spatial libs)
```
## Algorithms Included

40 worker algorithms are currently wired into the app:
- Classic line styles:
  - Squiggle
  - Squiggle Left/Right
  - Spiral
  - Polygon Spiral
  - Sawtooth
  - Springs
  - Waves
  - Needles
  - Implode
  - Longwave
  - Subline
- Stipple/point-structure:
  - Stipples
  - Stipples (Depth)
  - Delaunay
  - Circle Pack
- Raster/scan/hatch families:
  - Linedraw
  - Mosaic
  - Halftone
  - Boxes
  - Dots
  - Jaggy
  - Linescan
  - Woven
  - Peano
  - Margins
  - Crosshatch
  - Hatch Weave
  - Hatch Moire
  - Hatch Burst
  - Hatch Lattice
- Generative/field/growth:
  - Flow Field
  - Concentric
  - Hex Grid
  - Starburst
  - Constellation
  - Contours
  - Warped Grid
  - Differential Growth
  - DLA Growth
  - Reaction Diffusion
## Getting Started

Requirements:

- Node.js 18+ recommended
- npm

```shell
npm install        # install dependencies
npm run dev        # start the Vite dev server
npm run build      # create a production build
npm run preview    # preview the production build locally
```

## Usage Workflow

- Pick a source:
  - `Image` tab: load an image and pan/zoom the preview
  - `Webcam` tab: snapshot the current frame (optional mirror)
- Pick an algorithm.
- Tune algorithm parameters in the `Parameters` panel.
- Choose a color mode and optional palette/channel settings.
- Set output dimensions (`in` or `mm`) and auto-sync behavior.
- Export via `Download SVG`.
## How To Add A New Algorithm

- Create a new worker file in `src/workers/`, for example `myalgo.js`.
- Define controls first:

```javascript
postMessage(['sliders', defaultControls.concat([
  { label: 'My Param', value: 10, min: 1, max: 50 },
])]);
```

- Handle messages and output paths:

```javascript
onmessage = function (e) {
  const [config, pixData] = e.data;
  const getPixel = pixelProcessor(config, pixData);
  // build points/paths...
  postLines(lines); // or postCircles / postCurves
};
```

- Register the worker factory in `src/usePlotterController.js` (`workerModules`).
- Add the algorithm entry in `src/App.jsx` (`ALGORITHMS`).
- Validate in the UI and test the exported SVG in your plotting toolchain.
Tips:

- If controls should update without a restart, set `noRestart: true` on the control metadata.
- For responsive long computations, chunk the work across async boundaries (for example, `setTimeout` wrapped in promises) and post progress via `['msg', '...']`.
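The chunking pattern can be sketched as below: yield to the event loop between batches so the worker can still receive messages and post progress. The helper names are illustrative, not part of the app.

```javascript
// Resolve on the next macrotask, letting pending messages through.
const yieldToEventLoop = () => new Promise((resolve) => setTimeout(resolve, 0));

// Process items in fixed-size chunks, yielding between chunks and
// optionally reporting progress after each one.
async function processInChunks(items, handleItem, chunkSize = 500, onProgress) {
  for (let i = 0; i < items.length; i += chunkSize) {
    const end = Math.min(i + chunkSize, items.length);
    for (let j = i; j < end; j++) {
      handleItem(items[j]);
    }
    if (onProgress) onProgress(end, items.length);
    await yieldToEventLoop();
  }
}

// In a worker, progress could be posted as:
// const onProgress = (done, total) => postMessage(['msg', `processed ${done}/${total}`]);
```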
## Extension Points

- Add new controls in workers:
  - Types: `range` (default), `checkbox`, `select`
  - Optional fields: `step`, `displayLabel`, `requiresFaceBoundary`, `deferRestart`, `noRestart`
- Extend color behavior in `src/palette.js`:
  - New palette presets
  - New channel-building strategies
- Add new preprocessing in `src/helpers.js`:
  - Alternative tonal transforms
  - Custom masking modes
- Extend face/depth integrations:
  - Different models in `src/face.js` / `src/lib/depth.js`
- Improve export:
  - Add metadata, layer naming conventions, or pen-order optimization before serialization
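A new palette preset might be expressed as a list of channel descriptors. The shape below is a hypothetical illustration, not the actual schema used in `src/palette.js`:

```javascript
// Hypothetical custom palette preset: each channel gets an id, a pen color for
// its SVG group, and a weight used when building the grayscale channel image.
const duotonePreset = {
  id: 'duotone-warm',
  channels: [
    { id: 'shadow', color: '#1b2a4a', weight: 1.0 },
    { id: 'accent', color: '#d95d39', weight: 0.6 },
  ],
};
```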
## Known Limitations

- No automated tests are currently included.
- Worker registration is duplicated in two places:
  - `src/App.jsx` (labels)
  - `src/usePlotterController.js` (worker factories)
- Large images + heavy algorithms can still be compute-intensive.
- First-time depth/face usage requires model downloads from remote hosts.
- No built-in plotter device integration (export is SVG-only).
## Future Improvements

- Central algorithm registry shared by UI and controller (single source of truth)
- URL-shareable presets (algorithm + control state + output settings)
- Worker benchmarks and runtime profiling panel
- Better global route optimization across multi-channel outputs
- Optional simplify/smooth post-pass for very dense paths
- Deterministic seed handling across all algorithms
- Drag-and-drop source image support and batch image processing mode
- Snapshot history and side-by-side compare mode
- Automated tests for:
  - worker protocol
  - palette/channel builders
  - path splitting/export correctness
- Optional PWA/offline mode with pre-cached model assets
## Contributing

Contributions are welcome, especially:
- New or improved plotter algorithms
- Performance work in workers/pipeline
- Better export/toolchain interoperability
- Documentation and examples
When submitting algorithm changes, include:
- control descriptions
- expected runtime characteristics
- one or more sample SVG outputs
## License

This project is MIT licensed.

Copyright (c) 2026 Plotterfun Extended contributors
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
## Acknowledgments

- Original Plotterfun concept and algorithms by mitxela.
- Additional algorithm inspirations and ports credited in worker files and historical README notes:
- SquiggleCam / SquiggleDraw
- StippleGen
- linedraw.py by LingDong-
- Community algorithm contributions (Tim Koop, J-Waal, HomineLudens, MarkJB, labusaid, others)