* [Install TorchServe on Windows](docs/torchserve_on_win_native.md)
* [Install TorchServe on Windows Subsystem for Linux](docs/torchserve_on_wsl.md)
* [Serve a Model](#serve-a-model)
* [Quick start with docker](#quick-start-with-docker)
* [Contributing](#contributing)

- ## Install TorchServe
+ ## Install TorchServe and torch-model-archiver

1. Install dependencies

@@ -90,7 +90,7 @@ For information about the model archiver, see [detailed documentation](model-arc
## Serve a model

- This section shows a simple example of serving a model with TorchServe. To complete this example, you must have already [installed TorchServe and the model archiver](#install-with-pip).
+ This section shows a simple example of serving a model with TorchServe. To complete this example, you must have already [installed TorchServe and the model archiver](#install-torchserve-and-torch-model-archiver).

To run this example, clone the TorchServe repository:
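
For example:

```bash
# clone the TorchServe repository and enter it
git clone https://github.com/pytorch/serve.git
cd serve
```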

benchmarks/README.md (+10 -10)

@@ -12,7 +12,7 @@ We currently support benchmarking with JMeter & Apache Bench. One can also profi
## Installation

- It assumes that you have followed quick start/installation section and have required pre-requisites i.e. python3, java and docker [if needed]. If not then please refer [quick start](https://github.com/pytorch/serve/blob/master/README.md) for setup.
+ It assumes that you have followed the quick start/installation section and have the required prerequisites, i.e. python3, java and docker [if needed]. If not, please refer to the [quick start](../README.md) for setup.

@@ ... @@
- The pre-trained models for the benchmark can be mostly found in the [TorchServe model zoo](https://github.com/pytorch/serve/blob/master/docs/model_zoo.md). We currently support the following:
+ The pre-trained models for the benchmark can mostly be found in the [TorchServe model zoo](../docs/model_zoo.md). We currently support the following:

@@ -63,7 +63,7 @@ We also support compound benchmarks:
#### Using pre-built docker image

- * You can specify, docker image using --docker option. You must create docker by following steps given [here](https://github.com/pytorch/serve/tree/master/docker).
+ * You can specify a docker image using the --docker option. You must create the docker image by following the steps given [here](../docker/README.md).

```bash
cd serve/benchmarks
```

@@ -81,7 +81,7 @@ NOTE - '--docker' and '--ts' are mutually exclusive options
#### Using local TorchServe instance:

- * Install TorchServe using the [install guide](../README.md#install-torchserve)
+ * Install TorchServe using the [install guide](../README.md#install-torchserve-and-torch-model-archiver)

* Start TorchServe using the following command:

```bash
# start a local TorchServe instance (the model store path is a placeholder)
torchserve --start --model-store <model_store_path>
```

@@ -166,13 +166,13 @@ Using ```https``` instead of ```http``` as the choice of protocol might not work
The full list of options can be found by running with the -h or --help flags.
## Adding test plans

- Refer [adding a new jmeter](NewTestPlan.md) test plan for torchserve.
+ Refer to [adding a new JMeter test plan](add_jmeter_test.md) for TorchServe.

# Benchmarking with Apache Bench
## Installation

- It assumes that you have followed quick start/installation section and have required pre-requisites i.e. python3, java and docker [if needed]. If not then please refer [quick start](https://github.com/pytorch/serve/blob/master/README.md) for setup.
+ It assumes that you have followed the quick start/installation section and have the required prerequisites, i.e. python3, java and docker [if needed]. If not, please refer to the [quick start](../README.md) for setup.
### pip dependencies
@@ -204,7 +204,7 @@ Refer [parameters section](#benchmark-parameters) for more details on configurab
`python benchmark-ab.py`
### Run benchmark with a test plan

- The benchmark comes with pre-configured test plans which can be used directly to set parameters. Refer available [test plans](#test-plans) for more details.
+ The benchmark comes with pre-configured test plans which can be used directly to set parameters. Refer to the available [test plans](#test-plans) for more details.

`python benchmark-ab.py <test plan>`
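
For instance, assuming a pre-configured plan named `soak` exists:

```bash
# run the Apache Bench suite with a named test plan (plan name is an assumption)
python benchmark-ab.py soak
```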
### Run benchmark with a customized test plan
@@ -238,7 +238,7 @@ This command will use all the configuration parameters given in config.json file
### Benchmark parameters
The following parameters can be used to run the AB benchmark suite.

- - url: Input model URL. Default: "https://torchserve.pytorch.org/mar_files/squeezenet1_1.mar"
+ - url: Input model URL. Default: `https://torchserve.pytorch.org/mar_files/squeezenet1_1.mar`
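
For example, to point the suite at a specific model archive (the `--concurrency` and `--requests` flag names are assumed to mirror the parameters listed here):

```bash
# benchmark the published squeezenet mar file; load numbers are illustrative
python benchmark-ab.py --url https://torchserve.pytorch.org/mar_files/squeezenet1_1.mar \
  --concurrency 10 --requests 100
```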

@@ ... @@
- To create mar [model archive] file for torchserve deployment, you can use following steps
+ To create a mar [model archive] file for TorchServe deployment, you can use the following steps
1. Start the container, sharing your local model-store (or any directory containing custom/example mar files) as well as the model-store directory (create it if it does not exist)
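
A sketch of such a container start (the image tag and local path are illustrative):

```bash
# mount a local model-store so mar files are visible inside the container
docker run --rm -it \
  -v $(pwd)/model-store:/home/model-server/model-store \
  pytorch/torchserve:latest
```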

@@ ... @@
### Can I run Torchserve APIs on ports other than the default 8080 & 8081?
Yes, Torchserve API ports are configurable using a properties file or environment variable.
- Refer [configuration.md](https://github.com/pytorch/serve/blob/master/docs/configuration.md) for more details.
+ Refer to [configuration.md](configuration.md) for more details.
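
For example, a `config.properties` overriding both ports might look like this (the port values are illustrative):

```bash
# write a properties file with non-default ports, then start TorchServe with it
cat > config.properties <<'EOF'
inference_address=http://127.0.0.1:8085
management_address=http://127.0.0.1:8086
EOF
torchserve --start --model-store model_store --ts-config config.properties
```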
### How can I resolve a model-specific python dependency?
You can provide a requirements.txt while creating a mar file using the "--requirements-file/-r" flag. You can also add dependency files using the "--extra-files" flag.
- Refer [configuration.md](https://github.com/pytorch/serve/blob/master/docs/configuration.md) for more details.
+ Refer to [configuration.md](configuration.md) for more details.
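
A sketch of such an archiver call (the model name, weights file, and extra file are placeholders):

```bash
# bundle model-specific python dependencies into the mar file
torch-model-archiver --model-name my_model --version 1.0 \
  --serialized-file model.pt \
  --handler image_classifier \
  --requirements-file requirements.txt \
  --extra-files utils.py
```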
### Can I deploy Torchserve in Kubernetes?
Yes, you can deploy Torchserve in Kubernetes using Helm charts.
- Refer [Kubernetes deployment ](https://github.com/pytorch/serve/blob/master/kubernetes/README.md) for more details.
+ Refer to [Kubernetes deployment](../kubernetes/README.md) for more details.
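
A minimal sketch, assuming the Helm chart lives under the repo's `kubernetes` directory and the default values suffice:

```bash
# install the TorchServe Helm chart from a local clone (chart path is an assumption)
cd serve/kubernetes/Helm
helm install ts .
```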
### Can I deploy Torchserve with AWS ELB and AWS ASG?
Yes, you can deploy Torchserve on a multinode ASG AWS EC2 cluster. There is a CloudFormation template available [here](https://github.com/pytorch/serve/blob/master/cloudformation/ec2-asg.yaml) for this type of deployment. Refer to [Multi-node EC2 deployment behind Elastic LoadBalancer (ELB)](https://github.com/pytorch/serve/tree/master/cloudformation#multi-node-ec2-deployment-behind-elastic-loadbalancer-elb) for more details.
### How can I backup and restore Torchserve state?
TorchServe preserves server runtime configuration across sessions such that a TorchServe instance experiencing either a planned or unplanned service stop can restore its state upon restart. These saved runtime configuration files can be used for backup and restore.
- Refer [TorchServe model snapshot](https://github.com/pytorch/serve/blob/master/docs/snapshot.md#torchserve-model-snapshot) for more details.
+ Refer to [TorchServe model snapshot](snapshot.md#torchserve-model-snapshot) for more details.
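
Restoring from a saved snapshot might look like this (the snapshot filename is a placeholder; snapshots are written under the log location's `config` directory):

```bash
# restart TorchServe from a previously saved snapshot
torchserve --start --model-store model_store \
  --ts-config logs/config/<snapshot-file>.cfg
```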
### How can I build a Torchserve image from source?
- Torchserve has a utility [script]([https://github.com/pytorch/serve/blob/master/docker/build_image.sh](https://github.com/pytorch/serve/blob/master/docker/build_image.sh)) for creating docker images, the docker image can be hardware-based CPU or GPU compatible. A Torchserve docker image could be CUDA version specific as well.
+ Torchserve has a utility [script](../docker/build_image.sh) for creating docker images. The docker image can be CPU or GPU compatible, and it can also be CUDA version specific.
All these docker images can be created using `build_image.sh` with appropriate options.
Run `./build_image.sh --help` for all available options.

- Refer [Create Torchserve docker image from source](../docker/README.md#create-torchserve-docker-image-from-source) for more details.
+ Refer to [Create Torchserve docker image from source](../docker/README.md#create-torchserve-docker-image) for more details.
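
Illustrative invocations (the GPU flag is assumed from the script's help output):

```bash
./build_image.sh        # CPU image
./build_image.sh -g     # GPU image (flag assumed)
```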
### How to build a Torchserve image for a specific branch or commit id?
To create a Docker image for a specific branch, use the following command:
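
A sketch, assuming the script exposes a `-b`/branch flag:

```bash
# build an image from a specific branch of the serve repo (flag assumed)
./build_image.sh -b <branch_name>
```
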
@@ -84,50 +84,50 @@ The image created using Dockerfile.dev has Torchserve installed from source wher
### What can I use other than *curl* to make requests to Torchserve?
You can use any tool like Postman or Insomnia, or even a python script. Find a sample python script [here](https://github.com/pytorch/serve/blob/master/docs/default_handlers.md#torchserve-default-inference-handlers).
### How can I add a custom API to an existing framework?
You can add a custom API using **plugins SDK** available in Torchserve.
- Refer to [serving sdk](https://github.com/pytorch/serve/blob/master/serving-sdk) and [plugins](https://github.com/pytorch/serve/blob/master/plugins) for more details.
+ Refer to [serving sdk](../serving-sdk) and [plugins](../plugins) for more details.
### How can I pass multiple images in an inference request call to my model?
You can provide multiple data items in a single inference request to your custom handler as key-value pairs in the `data` object.
Refer [this](https://github.com/pytorch/serve/issues/529#issuecomment-658012913) for more details.
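
One way to send several images in one request (the endpoint, model name, and field names are illustrative; the custom handler decides how the keys are interpreted):

```bash
# POST two images as separate form fields in a single inference call
curl -X POST http://localhost:8080/predictions/my_model \
  -F "img1=@image1.jpg" \
  -F "img2=@image2.jpg"
```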

@@ ... @@
You would have to write a custom handler with the post-processing to return an image.
- Refer [custom service documentation](https://github.com/pytorch/serve/blob/master/docs/custom_service.md#custom-handlers) for more details.
+ Refer to [custom service documentation](custom_service.md#custom-handlers) for more details.
### How to enhance the default handlers?
Write a custom handler that extends the default handler and override only the methods you need to tune.
- Refer [custom service documentation](https://github.com/pytorch/serve/blob/master/docs/custom_service.md#custom-handlers) for more details.
+ Refer to [custom service documentation](custom_service.md#custom-handlers) for more details.
### Do I always have to write a custom handler or are there default ones that I can use?
Yes, you can deploy your model with zero code by using the built-in default handlers.
- Refer [default handlers](https://github.com/pytorch/serve/blob/master/docs/default_handlers.md#torchserve-default-inference-handlers) for more details.
+ Refer to [default handlers](default_handlers.md#torchserve-default-inference-handlers) for more details.
### Is it possible to deploy Hugging Face models?
Yes, you can deploy Hugging Face models using a custom handler.
- Refer [Huggingface_Transformers](https://github.com/pytorch/serve/blob/master/examples/Huggingface_Transformers/README.md) for example.
+ Refer to [Huggingface_Transformers](../examples/Huggingface_Transformers/README.md) for an example.

@@ ... @@
A mar file is a zip file consisting of all model artifacts with the ".mar" extension. The command-line utility *torch-model-archiver* is used to create a mar file.
### How can I create a mar file using a Torchserve docker container?
- Yes, you create your mar file using a Torchserve container. Follow the steps given [here](https://github.com/pytorch/serve/blob/master/docker/README.md#create-torch-model-archiver-from-container).
+ Yes, you can create your mar file using a Torchserve container. Follow the steps given [here](../docker/README.md#create-torch-model-archiver-from-container).
### Can I add multiple serialized files in a single mar file?
Currently `TorchModelArchiver` allows supplying only one serialized file with `--serialized-file` parameter while creating the mar. However, you can supply any number and any type of file with `--extra-files` flag. All the files supplied in the mar file are available in `model_dir` location which can be accessed through the context object supplied to the handler's entry point.
@@ -137,7 +137,7 @@ Sample code snippet:
```
# files supplied via --extra-files are extracted into model_dir
properties = context.system_properties
model_dir = properties.get("model_dir")
```

- Refer [Torch model archiver cli](https://github.com/pytorch/serve/blob/master/model-archiver/README.md#torch-model-archiver-command-line-interface) for more details.
+ Refer to [Torch model archiver cli](../model-archiver/README.md#torch-model-archiver-command-line-interface) for more details.