SLM Wiki
Marketplace and directory for specialised small language models in the 0.1B to 7B parameter range. Built for engineers choosing a domain-tuned model under a real constraint - latency, cost, privacy, deployment surface - rather than by leaderboard rank.
- Status: Live
- Client: Lyon Industries
- Role: Sole engineer
- Updated: May 6, 2026

Overview
SLM Wiki is a live catalogue and publishing platform for specialised small language models - models in the 0.1B to 7B parameter range, tuned for a single domain rather than benchmarked on general knowledge. Coverage spans medical diagnosis, legal contract analysis, embedded systems work on STM32-class microcontrollers, and code security review, with new domains added as practitioners publish.
The framing the site adopts is unambiguous: a specialised SLM beats a frontier general model on its own ground when the constraint is latency, cost, or privacy. Sub-200 ms inference and a 100x to 500x cost reduction over frontier-model inference are the headline numbers; the deeper argument is that a model the size of a small Docker image can be deployed on-prem, at the edge, or inside an air-gapped network without copying tenant data offsite.

Sections
- Models. Browse, preview, and pull specialised SLMs by domain, latency, cost, and licence.
- Train. Author and publish a custom model. Free or paid with revenue sharing.
- MCP. Connect any model on the marketplace to an agent or IDE through the Model Context Protocol.
- Enterprise. On-premises and edge deployment paths for teams with data residency or air-gap requirements.
- Research. Notes on SLM-versus-LLM behaviour, orchestration patterns, and the economics of a specialised-model marketplace.
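As a sketch of how the MCP path might look from the client side: an MCP-capable agent or IDE registers a local server that proxies to a marketplace model, using the `mcpServers` configuration convention common to MCP clients. The package name and model identifier below are hypothetical placeholders, not published artifacts.

```json
{
  "mcpServers": {
    "slmwiki-contracts": {
      "command": "npx",
      "args": [
        "-y",
        "@slmwiki/mcp-server",
        "--model", "legal-contract-analysis-3b"
      ]
    }
  }
}
```

With an entry like this in place, the agent discovers the model's tools over the MCP handshake and routes calls to it alongside any other configured servers.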
Live surface
The catalogue is live at slmwiki.vercel.app. A separate write-up on the marketplace's economic model and the orchestration patterns that emerge when small specialised models replace a single general one is forthcoming.