DecideAI logo
Redactor v0.2
A fully onchain abusive-language detector. Enter some text and the model will evaluate it, blurring any text it classifies as abusive: the more abusive a token, the stronger the blur.
Enter text:
Token:
No tokens to display.
Total score:
0.00
Adjust Bias:
-1.5
Adjust Scale:
4.5
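The Bias and Scale sliders above appear to post-process the model's raw per-token scores before they are rendered as blur. A minimal sketch of one plausible mapping, assuming a sigmoid applied to `scale * score + bias` (the exact formula Redactor uses is not documented on this page; `blur_amount` and `token_score` are hypothetical names):

```python
import math

def blur_amount(token_score: float, bias: float = -1.5, scale: float = 4.5) -> float:
    """Map a raw model score to a 0..1 blur intensity.

    Hypothetical post-processing: stretch the score by `scale`, shift it
    by `bias`, then squash with a sigmoid so the result is a valid
    blur/opacity factor. Defaults match the UI sliders above.
    """
    return 1.0 / (1.0 + math.exp(-(scale * token_score + bias)))

# Higher raw scores yield more blur; a neutral token stays mostly visible.
print(round(blur_amount(0.0), 3))
print(round(blur_amount(1.0), 3))
```

With these defaults, a score of 0 maps to a low blur (about 0.18) and a score of 1 maps to a high blur (about 0.95), which matches the intent of the bias and scale controls: bias sets the baseline sensitivity, scale sets how sharply blur ramps up with the score.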
Tokenization
Canister:
zmczv-cyaaa-aaaao-a34kq-cai
Query:
tokenize_text: (text) → (vec nat64, vec text) query
Inference
Canister:
zmczv-cyaaa-aaaao-a34kq-cai
Query:
inference: (vec int64) → (vec float32) query
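The two queries compose into a simple pipeline: tokenize the input text, then run inference on the token ids and pair each token with its score. A sketch of that flow with stubbed canister calls (the stubs stand in for actual query calls to canister `zmczv-cyaaa-aaaao-a34kq-cai` through an IC agent; their bodies are illustrative placeholders, not the real tokenizer or model):

```python
from typing import List, Tuple

def tokenize_text(text: str) -> Tuple[List[int], List[str]]:
    """Stub for the `tokenize_text` query: (token ids, token strings).

    The real canister applies the model's own tokenizer; this stub just
    splits on whitespace and fakes ids for demonstration.
    """
    tokens = text.split()
    return [abs(hash(t)) % 50_000 for t in tokens], tokens

def inference(token_ids: List[int]) -> List[float]:
    """Stub for the `inference` query: one float score per token id.

    The real canister runs the onchain model; this stub returns zeros.
    """
    return [0.0 for _ in token_ids]

def redact(text: str) -> List[Tuple[str, float]]:
    """Pipeline: tokenize, score, then zip tokens with their scores."""
    ids, tokens = tokenize_text(text)
    scores = inference(ids)
    return list(zip(tokens, scores))

print(redact("hello world"))
```

In the live app the frontend would issue both calls as IC query calls (both methods are annotated `query`, so they are read-only and fast) and then use each token's score to set its blur.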