slonks → sladys
an on-chain art experiment about neural reconstruction of miladys, not perfect copying.
a tiny candidate-palette image model is trained on the top-1024 miladys downscaled to 24×24, then uploaded as bytecode. every tokenURI() read pulls the model’s render and the canonical milady (also stored on-chain) and counts the pixels where they disagree. that count is the token’s slop.
* default mintPrice = 0, owner can adjust before opening public mint.
one slady, three views
every token is rendered as three layers: what the model produced, the canonical milady it was trying to reconstruct, and a one-bit mask of the pixels where the two disagree.
sladys are not pixel-perfect copies of miladys. a tiny image model — small enough to live in contract bytecode — is asked to reconstruct each milady from a single 10-byte embedding. it gets close, and only rarely exact. the diff between the model’s output and the original is the slop: a per-token integer that rewards reconstructions that nailed it and exposes the ones that didn’t.
the slop mask, the model render, and the original milady are all composited into a single SVG inside tokenURI(). nothing is served from IPFS. nothing is generated off-chain. the JSON is just a base64 of three on-chain reads.
the model
a candidate-palette renderer. for every pixel, the model picks one of K palette indexes by taking an argmax of an inner product against learned per-pixel heads.
instead of regressing 24×24×3 = 1728 floats per token, the model emits a single index per pixel into a 256-color palette. every pixel has its own little vocabulary of K=18 candidate colors, and a learned head decides which one wins. that’s it. no decoder MLP, no attention layers, no FP16.
slot[token, pixel] = argmax_k dot( embedding[token], head[pixel, k] )
palette_index = candidates[pixel, slot]
rgba = PALETTE_RGBA[palette_index]

embeddings, heads, and per-pixel candidate tables are all stored as int8 in SSTORE2 chunks, and read back inside an inline-assembly hot loop. the renderer walks 576 pixels, does 18 dot products each, and emits 576 palette indexes. then MiladysData turns those into 2304 RGBA bytes via a single PALETTE_RGBA lookup.
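a minimal sketch of that per-pixel argmax in plain solidity, leaving out the SSTORE2 reads and the assembly hot loop; _embedding, _head, and _candidates are hypothetical accessors standing in for the chunked int8 tables:

// hypothetical accessors over the int8 SSTORE2 chunks:
//   _embedding(token, d)     -> int8   (embedDim = 10)
//   _head(pixel, k, d)       -> int8   (K = 18 heads per pixel)
//   _candidates(pixel, slot) -> uint8  (palette index)
function _renderPixel(uint16 token, uint16 pixel) internal view returns (uint8 paletteIndex) {
    int256 best = type(int256).min;
    uint8 bestSlot;
    for (uint8 k; k < 18; ++k) {
        int256 score;
        for (uint8 d; d < 10; ++d) {
            // dot product of the 10-byte embedding against head k
            score += int256(_embedding(token, d)) * int256(_head(pixel, k, d));
        }
        if (score > best) { best = score; bestSlot = k; }
    }
    paletteIndex = _candidates(pixel, bestSlot); // one of this pixel's 18 candidate colors
}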
target accuracy is ~95% pixel match on the training set, mirroring slonks (95.77%). some sladys will reach exact (slop = 0) — slonks had 32 of those out of 10000. expect a similar share here.
slop is recomputed live
nothing about the slop is cached. every read of tokenURI runs the model, reads the original milady, walks 576 pixels, and OR-s a 72-byte mask while incrementing a counter.
if the renderer is ever swapped (it’s the only mutable piece of the system), the slop value can shift to reflect the new logic. the model itself is locked, so the underlying art doesn’t move — only the way it’s framed.
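a hypothetical sketch of that swap, assuming an owner-gated pointer on the shell (names illustrative, not the deployed interface):

interface ISladysRenderer { function tokenURI(uint256 id) external view returns (string memory); }

// hypothetical: the renderer address is the one mutable pointer;
// model, data, and embeddings stay locked.
function setRenderer(address newRenderer) external onlyOwner {
    renderer = ISladysRenderer(newRenderer);
}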
the live check itself:

function _slopMaskAndValue(bytes memory pixels, bytes memory rgbaO)
    internal view returns (bytes memory mask, uint16 slop)
{
    mask = new bytes(72); // 576 / 8
    for (uint256 i; i < 576; ++i) {
        bytes4 a = PALETTE_RGBA[uint8(pixels[i])]; // model pixel, resolved to rgba
        bytes4 b; // original rgba at the same pixel
        // mload grabs a full word; bytes4 keeps the high 4 bytes,
        // i.e. rgbaO[i*4 .. i*4+3]
        assembly { b := mload(add(add(rgbaO, 32), mul(i, 4))) }
        if (a != b) {
            mask[i >> 3] |= bytes1(uint8(1 << (i & 7)));
            unchecked { ++slop; }
        }
    }
}

the mask is rendered as a third overlay layer in the SVG, so you can literally see which pixels the model got wrong. tokens with low slop look almost identical to their source milady. tokens with high slop look like the model gave up halfway and reached for the nearest approximation it could afford.
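one way that overlay could be emitted, as a hypothetical helper that walks the 72-byte mask and drops a 1×1 rect per flagged pixel (_u is an assumed uint-to-string helper, and the fill color is illustrative):

// hypothetical: expand the slop mask into svg rects, row-major
// over the 24×24 grid; bit i set means pixel i mismatched.
function _maskRects(bytes memory mask) internal pure returns (bytes memory out) {
    for (uint256 i; i < 576; ++i) {
        if (((uint8(mask[i >> 3]) >> (i & 7)) & 1) == 1) {
            out = abi.encodePacked(
                out,
                '<rect x="', _u(i % 24), '" y="', _u(i / 24),
                '" width="1" height="1" fill="#ff00ff"/>'
            );
        }
    }
}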
merging
two same-level sladys can be merged. the donor is burned, the survivor inherits a blended embedding, and its merge level ticks up by one.
merge is a destructive, irreversible operation: you pick a survivor and a donor, both at the same merge level, both owned by you. the merge manager loads both 10-byte embeddings, takes a signed int8 element-wise mean, packs the result, increments mergeLevel[survivor], and burns the donor.
function merge(uint256 survivor, uint256 donor) external {
require(survivor != donor); // a token can't merge with itself
require(_isOwner(msg.sender, survivor) && _isOwner(msg.sender, donor));
require(mergeLevel[survivor] == mergeLevel[donor]);
int8[10] memory a = _readEmbedding(survivor);
int8[10] memory b = _readEmbedding(donor);
int8[10] memory blended;
for (uint i; i < 10; ++i) {
blended[i] = int8((int16(a[i]) + int16(b[i])) / 2);
}
packedEmbedding[survivor] = _pack(blended);
mergeLevel[survivor]++; // capped at 255
sladys.burn(donor); // mergeManager-only
emit SladyMerged(survivor, donor, mergeLevel[survivor]);
}

survivor’s anchor milady (its sourceId) does not change — the slop comparison still uses the same canonical milady. so as you merge, your render drifts away from the original, and slop usually goes up. that’s the trade.
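the packing step above, sketched under an assumed layout (10 int8s big-endian in the low 80 bits of a word; the deployed format may differ):

// hypothetical layout for packedEmbedding: 10 int8s, big-endian,
// in the low 80 bits of a uint80.
function _pack(int8[10] memory e) internal pure returns (uint80 packed) {
    for (uint256 i; i < 10; ++i) {
        packed = (packed << 8) | uint80(uint8(e[i]));
    }
}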
contracts
five contracts. four mirror slonks one-to-one. the fifth — MiladysData — is the differentiator: a full on-chain mirror of the 1024 selected miladys.
Sladys
ERC-721 shell. Mint, supply, ownership, reveal.
SladysImageModel
Candidate-palette renderer. vocab=1024, embedDim=10, K=18.
SladysMergeManager
Merge two same-level tokens. Burns donor.
SladysRenderer
tokenURI JSON builder. Computes slop live.
MiladysData
On-chain mirror of the 1024 selected Miladys. 256-color palette + packed traits.
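stubbed interfaces for the four contracts the renderer reads from, with hypothetical signatures inferred from the read path below (the deployed ABI may differ):

interface ISladys {
    function sourceIdFor(uint256 id) external view returns (uint16);
}
interface ISladysImageModel {
    function renderSourcePixels(uint16 sourceId) external view returns (bytes memory); // 576 palette indexes
    function renderEmbeddingPixels(bytes memory embedding) external view returns (bytes memory);
}
interface ISladysMergeManager {
    function mergeLevel(uint256 id) external view returns (uint8);
    function mergeEmbedding(uint256 id) external view returns (bytes memory); // 10-byte blended embedding
}
interface IMiladysData {
    function miladyImage(uint16 sourceId) external view returns (bytes memory); // 2304 B rgba
    function miladyAttributes(uint16 sourceId) external view returns (string memory); // traits json
}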
the tokenURI read path looks like this — a handful of cross-contract reads, no off-chain fetch:
function tokenURI(uint256 tokenId) external view returns (string memory) {
uint16 sourceId = sladys.sourceIdFor(tokenId);
uint8 level = mergeManager.mergeLevel(tokenId);
bytes memory pixels = level == 0
? imageModel.renderSourcePixels(sourceId)
: imageModel.renderEmbeddingPixels(mergeManager.mergeEmbedding(tokenId));
bytes memory rgbaO = miladysData.miladyImage(sourceId); // 2304 B
(bytes memory mask, uint16 slop) = _slopMaskAndValue(pixels, rgbaO);
bytes memory svg = _buildSvg(pixels, rgbaO, mask);
string memory traits = miladysData.miladyAttributes(sourceId);
    // packs {name, image, animation_url, attributes} (slop included
    // in attributes) and base64-encodes the json
    return _jsonBase64(tokenId, svg, traits, slop);
}

palette
256 colors derived from the 1024 downscaled miladys via median-cut. no nearest-color quantization at render time — the palette covers all source pixels exactly.
median-cut over 1024 × 576 = 589,824 source pixels gives a fixed 256-entry palette that’s wide enough to hold every original color a milady actually uses. the model never has to reach for an approximation. that’s the only way the slop value stays meaningful: a wrong pixel is the model’s fault, not a quantization artifact.
the palette lives on MiladysData.PALETTE_RGBA: 256 × 4 = 1024 bytes. the renderer indexes into it for both the model output and the canonical milady, so both layers share a colorspace by construction. swatches above are a stand-in; the real palette will be baked from the curated 1024 at deploy.
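a sketch of resolving one entry out of that buffer, assuming the raw 1024 bytes are handed over as bytes memory (helper name hypothetical):

// hypothetical: palette index -> rgba word, from the 256 × 4 byte buffer.
function paletteColor(bytes memory paletteRgba, uint8 index) internal pure returns (bytes4 rgba) {
    uint256 off = uint256(index) * 4;
    rgba = bytes4(
        (uint32(uint8(paletteRgba[off])) << 24) |
        (uint32(uint8(paletteRgba[off + 1])) << 16) |
        (uint32(uint8(paletteRgba[off + 2])) << 8) |
        uint32(uint8(paletteRgba[off + 3]))
    );
}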