Documentation Index
Fetch the complete documentation index at: https://docs.eachlabs.ai/llms.txt
Use this file to discover all available pages before exploring further.
Safety Checker
All predictions run with the safety checker enabled by default (enable_safety_checker: true). When enabled, generated content is filtered for NSFW material. If the output is flagged, the prediction returns a filtered result or an error.
To disable the safety checker, pass enable_safety_checker: false in the input object of your prediction request.
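As a sketch, a request body with the safety checker disabled could be assembled like this. Only the placement of enable_safety_checker inside the input object and the model slug come from this page; the other field names (model, prompt) are illustrative assumptions about the request shape:

```python
import json

# Hypothetical prediction request body. Only the "input" object and the
# enable_safety_checker field inside it are documented on this page;
# "model" and "prompt" are assumed placeholder fields.
request_body = {
    "model": "wan-v2-6-text-to-video",  # a slug from the supported-models table
    "input": {
        "prompt": "a timelapse of clouds over a mountain range",
        "enable_safety_checker": False,  # disables NSFW filtering on supported models
    },
}

print(json.dumps(request_body, indent=2))
```

On models that do not support the parameter, sending this field is harmless: the filter simply stays on.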
Usage
Supported Models
The enable_safety_checker parameter is not available on all models; only the models listed below let you toggle the safety filter. Models that do not support it silently ignore the parameter.
| Model | Slug | Type |
|---|---|---|
| Wan v2.6 Text-to-Video | wan-v2-6-text-to-video | Video Generation |
| Wan v2.6 Image-to-Video | wan-v2-6-image-to-video | Video Generation |
| Seedream v4.5 | seedream-v4-5-text-to-image | Image Generation |
Behavior Summary
| enable_safety_checker | Supported Model | Unsupported Model |
|---|---|---|
| true (default) | NSFW content filtered | NSFW content filtered |
| false | NSFW filter disabled | Parameter ignored, filter stays on |
| Not provided | Same as true | Same as true |
each::sense
When using each::sense, pass enable_safety_checker as a top-level request field instead of inside the input object.
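The difference in placement can be sketched as follows. Only the input object and the enable_safety_checker field are documented here; the prompt field and overall request shape are assumptions for illustration:

```python
import json

# Regular prediction: enable_safety_checker lives inside the input object.
prediction_request = {
    "input": {
        "prompt": "a city street at night",  # illustrative field
        "enable_safety_checker": False,
    },
}

# each::sense: enable_safety_checker is a top-level request field.
sense_request = {
    "enable_safety_checker": False,  # top level, not inside input
    "input": {
        "prompt": "a city street at night",
    },
}

print(json.dumps(sense_request, indent=2))
```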
Note that the safety checker is a model-level feature: each::labs does not add an additional content filter on top of the model's own safety system.