

The most important functionality of the API is doing fuzzy inference with a single agent. In fuzzy inference, you provide input values to the fuzzy agent, and it makes decisions based on those inputs to provide you with outputs.

Most applications will only need this endpoint.

POST <agent id>

To do inference, make an HTTP POST request to the API URL for your agent. This requires the agent id, which is available on the agent page in the Web UI.


The body of the request must be UTF-8-encoded JSON. It is an object whose properties are the names of the inputs for your agent, and whose values are integers or floating-point numbers that are the values of those inputs. For example:

"numberOfLikes": 18,
"numberOfShares": 4,
"age": 33

Be careful not to include strings or other JSON types as the values of the inputs. Inference only works with numbers!
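As a sketch, building and validating the request body in Python might look like this. The input names are just the example inputs above; any real agent will have its own.

```python
import json

# Example input values for an agent with inputs
# numberOfLikes, numberOfShares, and age.
inputs = {"numberOfLikes": 18, "numberOfShares": 4, "age": 33}

# The API only accepts numbers as input values, so validate before sending.
# (bool is a subclass of int in Python, so reject it explicitly.)
for name, value in inputs.items():
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        raise TypeError(f"input {name!r} must be a number")

# The request body is the UTF-8-encoded JSON serialization of the object.
body = json.dumps(inputs).encode("utf-8")
```

Sending `body` as the POST payload is then a matter of whatever HTTP client you already use.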


The body of the response is an object whose properties are the names of the outputs of your agent, and whose values are the (inferred) values of those outputs. For example:

"relevance": 50.5

Note that the agent will return an object even if there is only one output.

The response will include a custom HTTP header, X-Evaluation-ID. You can use this ID for the evaluation portion of the API, described below.
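A minimal sketch of handling the response in Python; the response body and the evaluation ID value here are illustrative, not real API output.

```python
import json

# Illustrative raw response from a single-output agent.
raw_body = '{"relevance": 50.5}'
response_headers = {"X-Evaluation-ID": "abc123"}  # hypothetical ID value

# The body is a JSON object keyed by output name, even for one output.
outputs = json.loads(raw_body)
relevance = outputs["relevance"]

# Keep the evaluation ID if you plan to use the evaluation API later.
evaluation_id = response_headers["X-Evaluation-ID"]
```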

Batch inference

Alternatively, you can post a batch of inputs. The request body is a JSON array, with each element an input object. For example:

  "numberOfLikes": 4,
  "numberOfShares": 8,
  "age": 15
  "numberOfLikes": 16,
  "numberOfShares": 23,
  "age": 42

Each object in a batch counts as one evaluation against your API limit! We recommend at most 100 objects per batch; there is a hard limit of 4096 objects per batch.
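If you have more inputs than fit in one batch, you can split them client-side. A simple sketch, using the recommended batch size of 100:

```python
# Split a large list of input objects into batches, staying at the
# recommended 100 objects per batch (hard limit: 4096).
BATCH_SIZE = 100

def make_batches(inputs, batch_size=BATCH_SIZE):
    """Yield successive slices of at most batch_size input objects."""
    for start in range(0, len(inputs), batch_size):
        yield inputs[start:start + batch_size]

# Hypothetical list of 250 input objects.
all_inputs = [
    {"numberOfLikes": i, "numberOfShares": i, "age": 30} for i in range(250)
]
batches = list(make_batches(all_inputs))
# 250 inputs split into batches of 100, 100, and 50.
```

Each batch is then posted as its own request.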

If you posted a batch of inputs, you will get a batch of outputs back:

  "relevance": 4.8
  "relevance": 15.16

The batch results are in order: the input object at position 3 in the request array generates the output at position 3 in the response array.
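Because the ordering is guaranteed, pairing each input with its output is just a positional zip. A sketch, using the example batch above:

```python
# Batch results come back in request order, so pair them up by position.
batch_inputs = [
    {"numberOfLikes": 4, "numberOfShares": 8, "age": 15},
    {"numberOfLikes": 16, "numberOfShares": 23, "age": 42},
]
batch_outputs = [{"relevance": 4.8}, {"relevance": 15.16}]

# paired[i] holds (input object, output object) for position i.
paired = list(zip(batch_inputs, batch_outputs))
```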

If you do batch evaluations, the X-Evaluation-ID header will not be provided. If you need evaluation IDs for learning, set the meta flag, as described below.


Inference metadata

It is possible to get the metadata for an inference in the results of the inference. To do this, add a meta query parameter to the inference endpoint.

POST <agent id>?meta=true

If the meta parameter is any truthy-sounding value, like true, on, yes, or 1, then the metadata for the evaluation will be returned in the outputs as a property named meta.

If the parameter is any other value, then the metadata will be returned in the outputs as a property with that value as the name. So meta=audit will result in an audit property in the outputs. This is useful in the unlikely case that you already have a meta property in your outputs.

Each meta property in the outputs will be an object with the same properties as an evaluation.

If you do batch inference, and you include the meta flag, a meta property is added to each output.
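Putting the meta flag together, a sketch in Python of requesting a renamed metadata property and splitting it back out of an output object. The output shown and its metadata field names are illustrative, not the actual evaluation schema.

```python
from urllib.parse import urlencode

# Build the query string; "audit" renames the metadata property so it
# cannot clash with an output that is already named "meta".
query = urlencode({"meta": "audit"})  # appended to the agent URL after "?"

# Illustrative output object with the renamed metadata property
# (the fields inside it follow the evaluation schema; names here are made up).
output = {"relevance": 4.8, "audit": {"evaluationId": "abc123"}}

# Separate the inferred values from the metadata.
metadata = output.pop("audit")
```

After the `pop`, `output` holds only the inferred values and `metadata` holds the evaluation information.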