HTTP API
Endpoints
POST /parse
You must POST data in this format: '{"q":"<your text to parse>"}'. You can do this with:
$ curl -XPOST localhost:5000/parse -d '{"q":"hello there"}'
By default, when the project is not specified in the query, the "default" one will be used.
You can (and should) specify the project you want to use in your query:
$ curl -XPOST localhost:5000/parse -d '{"q":"hello there", "project": "my_restaurant_search_bot"}'
By default, the latest trained model for the project will be loaded. You can also query against a specific model for a project:
$ curl -XPOST localhost:5000/parse -d '{"q":"hello there", "project": "my_restaurant_search_bot", "model": "<model_XXXXXX>"}'
POST /train
You can post your training data to this endpoint to train a new model for a project.
This request will wait for the server's answer: either the model was trained successfully or the training exited with an error.
Using the HTTP server, you must specify the project you want to train a new model for, so that you can use it during parse requests later on: /train?project=my_project. The configuration of the model should be posted as the content of the request:
Using training data in JSON format:
language: "en"
pipeline: "spacy_sklearn"
# data contains the same json, as described in the training data section
data: {
  "rasa_nlu_data": {
    "common_examples": [
      {
        "text": "hey",
        "intent": "greet",
        "entities": []
      }
    ]
  }
}
Using training data in Markdown format:
language: "en"
pipeline: "spacy_sklearn"
# data contains the same md, as described in the training data section
data: |
  ## intent:affirm
  - yes
  - yep

  ## intent:goodbye
  - bye
  - goodbye
Here is an example request showcasing how to send the config to the server to start the training:
$ curl -XPOST -H "Content-Type: application/x-yml" localhost:5000/train?project=my_project \
-d @sample_configs/config_train_server_md.yml
Note
The request should always be sent as application/x-yml regardless of whether you use JSON or Markdown for the data format. Do not send JSON as application/json, for example.
Note
You cannot send a training request for a project already training a new model (see below).
Note
The server will automatically generate a name for the trained model. If
you want to set the name yourself, call the endpoint using
localhost:5000/train?project=my_project&model=my_model_name
POST /evaluate
You can use this endpoint to evaluate data on a model. The query string takes the project (required) and a model (optional). You must specify the project in which the model is located. If you don't specify a model, the latest one will be selected. This endpoint returns some common sklearn evaluation metrics (accuracy, F1 score, and precision), as well as a summary report.
$ curl -XPOST "localhost:5000/evaluate?project=my_project&model=model_XXXXXX" -d @data/examples/rasa/demo-rasa.json | python -mjson.tool
{
    "accuracy": 0.19047619047619047,
    "f1_score": 0.06095238095238095,
    "precision": 0.036281179138321996,
    "predictions": [
        {
            "intent": "greet",
            "predicted": "greet",
            "text": "hey",
            "confidence": 1.0
        },
        ...
    ],
    "report": ...
}
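A corresponding Python sketch for running an evaluation could look like this (paths and names taken from the curl example above; the model parameter is optional):

import requests

# Minimal sketch: evaluate a test data file against a trained model.
with open("data/examples/rasa/demo-rasa.json") as f:
    test_data = f.read()

response = requests.post(
    "http://localhost:5000/evaluate",
    params={"project": "my_project", "model": "model_XXXXXX"},
    data=test_data,
)
response.raise_for_status()
results = response.json()
print(results["accuracy"], results["f1_score"], results["precision"])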
GET /status
This returns all currently available projects, their status (training or ready), and the models they have loaded in memory. These are the projects the server can use to fulfill /parse requests.
$ curl localhost:5000/status | python -mjson.tool
{
    "available_projects": {
        "my_restaurant_search_bot": {
            "status": "ready",
            "available_models": [
                <model_XXXXXX>,
                <model_XXXXXX>
            ]
        }
    }
}
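Because a project cannot train two models at once (see the note under POST /train), a small sketch like the following can check whether a project is ready before sending another training or parse request (the project name is the one from the example above):

import requests

# Minimal sketch: check a project's status before starting a new training run.
status = requests.get("http://localhost:5000/status").json()
project = status["available_projects"].get("my_restaurant_search_bot", {})
if project.get("status") == "ready":
    print("project is ready, safe to train or parse")
else:
    print("project is still training or unknown")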
GET /version
This will return the current version of the Rasa NLU instance, as well as the minimum model version required for loading models.
$ curl localhost:5000/version | python -mjson.tool
{
    "version": "0.13.0",
    "minimum_compatible_version": "0.13.0"
}
GET /config
This will return the default model configuration of the Rasa NLU instance.
$ curl localhost:5000/config | python -mjson.tool
{
    "config": "/app/rasa_shared/config_mitie.json",
    "data": "/app/rasa_nlu/data/examples/rasa/demo-rasa.json",
    "duckling_dimensions": null,
    "emulate": null,
    ...
}
DELETE /models
This will unload a model from the server's memory:
$ curl -X DELETE "localhost:5000/models?project=my_restaurant_search_bot&model=model_XXXXXX"
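The same unload request from Python, as a sketch (project and model names as in the example above):

import requests

# Minimal sketch: unload a specific model from server memory.
response = requests.delete(
    "http://localhost:5000/models",
    params={"project": "my_restaurant_search_bot", "model": "model_XXXXXX"},
)
response.raise_for_status()
print("unloaded model, status code:", response.status_code)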
Have questions or feedback?
We have a very active support community on the Rasa Community Forum that is happy to help you with your questions. If you have any feedback for us or a specific suggestion for improving the docs, feel free to share it by creating an issue on the Rasa NLU GitHub repository.