Inferometer
Measure inference latency & visualize API responses.
Model / Endpoint URL
HTTP Method
POST
GET
Content type
application/json
multipart/form-data
Request body (JSON) — example input
{"input":"The quick brown fox"}
Tip: leave the endpoint empty to run a local mock inference that demonstrates latency measurement and the output format.
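A local mock inference might look like the following sketch: a Promise resolved after a simulated delay, timed with `performance.now()`. The function names (`mockInfer`, `timedCall`) and the 50–150 ms range are illustrative assumptions, not the app's actual implementation.

```javascript
// Hypothetical mock inference: resolves after a random simulated delay,
// so the UI can demonstrate latency numbers without a real endpoint.
function mockInfer(input) {
  const delay = 50 + Math.random() * 100; // simulated 50-150 ms latency
  return new Promise((resolve) => {
    setTimeout(() => resolve({ output: input.toUpperCase(), model: "mock" }), delay);
  });
}

// Time any async call and report the elapsed wall-clock milliseconds.
async function timedCall(fn, input) {
  const start = performance.now();
  const response = await fn(input);
  const latencyMs = performance.now() - start;
  return { response, latencyMs };
}
```

The same `timedCall` wrapper would work unchanged for a real `fetch` against a remote endpoint.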
Run Inference
Clear Results
Insert Sample Image Request
Last latency
— ms
Average latency
— ms
Calls
0
Response
No results yet.
Configuration & Headers
Custom headers (JSON)
{"Authorization":"Bearer YOUR_KEY"}
Advanced
Timeout:
Repeat:
Delay (ms):
Notes
This is a static client app. To measure a real model hosted behind an API, provide a reachable endpoint and a valid Authorization header if required.
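The Repeat and Delay controls above suggest a benchmarking loop. One way to sketch it, assuming a generic async `call` and the stats shown in the UI (last latency, average, call count); `benchmark` is an illustrative name, not the app's own function:

```javascript
// Run `call` several times, pausing between runs, and compute the
// per-call latencies plus their average -- matching the Last latency /
// Average latency / Calls readouts in the UI.
async function benchmark(call, { repeat = 3, delayMs = 0 } = {}) {
  const latencies = [];
  for (let i = 0; i < repeat; i++) {
    const start = performance.now();
    await call();
    latencies.push(performance.now() - start);
    // Optional pause between calls (skipped after the final one).
    if (delayMs > 0 && i < repeat - 1) {
      await new Promise((r) => setTimeout(r, delayMs));
    }
  }
  const average = latencies.reduce((a, b) => a + b, 0) / latencies.length;
  return { latencies, average, last: latencies[latencies.length - 1] };
}
```

Note that client-side timing includes network round-trip and browser overhead, so these numbers are end-to-end latency, not pure model inference time.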