
Commit bacdf0a

harshbafna, shivamshriwas, dhaniram-kshirsagar, maaquib, and dhanainme authored
[WIP] Documentation fixes and enhancements (#584)
* added torch-model-archiver in bug template
* fixed broken links in main README
* refactored image_classifier example readme
* minor fixes in docker documentation
* refactored examples main readme
* fixed broken link issue in batch inference documentation
* updated model archiver documentation with details for requirements file
* added markdown link check in sanity script
* install npm markdown package in buildspec.yml
* fixed broken links
* link checker script fixes and doc fixes
* Updated squeezenet readme - updated buildspec node installation steps
* adds comment for pytest failure check
* install nodejs
* fixed link check disabled for localhost urls
* uncommented link checker
* incorporated doc review comments
* updated path in instructions
* fixed broken links
* fixed link checker issues
* link fixes
* updated ubuntu regression log links
* updated links

Co-authored-by: Aaqib <[email protected]>
Co-authored-by: Shivam Shriwas <[email protected]>
Co-authored-by: dhaniram-kshirsagar <[email protected]>
Co-authored-by: dhanainme <[email protected]>
1 parent 8ecd581 commit bacdf0a
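The commit message above mentions adding a markdown link check to the sanity script via an npm package. The exact package and invocation are not shown on this page, so the following is only a plausible sketch assuming the widely used markdown-link-check package:

```bash
# Assumed tooling: the commit message names an npm markdown package but not
# which one; markdown-link-check is a common choice for this job.
npm install -g markdown-link-check
# Check every markdown file in the repo; localhost URLs (which the commit
# disables checking for) can be ignored via a --config ignore-pattern file.
find . -name '*.md' -print0 | xargs -0 -n1 markdown-link-check
```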

27 files changed: +336/-194 lines

.github/ISSUE_TEMPLATE/bug_template.md (+1)
````diff
@@ -12,6 +12,7 @@ Please search on the [issue tracker](https://github.com/pytorch/serve/issues) be
 <!--- How has this issue affected you? What are you trying to accomplish? -->
 <!--- Providing context helps us come up with a solution that is most useful in the real world -->
 * torchserve version:
+* torch-model-archiver version:
 * torch version:
 * torchvision version [if any]:
 * torchtext version [if any]:
````

CODE_OF_CONDUCT.md (+1/-1)
````diff
@@ -55,7 +55,7 @@ a project may be further defined and clarified by project maintainers.
 ## Enforcement
 
 Instances of abusive, harassing, or otherwise unacceptable behavior may be
-reported by contacting the project team at <[email protected]>. All
+reported by contacting the project team at \<[email protected]\>. All
 complaints will be reviewed and investigated and will result in a response that
 is deemed necessary and appropriate to the circumstances. The project team is
 obligated to maintain confidentiality with regard to the reporter of an incident.
````

README.md (+4/-4)
````diff
@@ -16,14 +16,14 @@ TorchServe is a flexible and easy to use tool for serving PyTorch models.
 
 ## Contents of this Document
 
-* [Install TorchServe](#install-torchserve)
+* [Install TorchServe](#install-torchserve-and-torch-model-archiver)
 * [Install TorchServe on Windows](docs/torchserve_on_win_native.md)
 * [Install TorchServe on Windows Subsystem for Linux](docs/torchserve_on_wsl.md)
 * [Serve a Model](#serve-a-model)
 * [Quick start with docker](#quick-start-with-docker)
 * [Contributing](#contributing)
 
-## Install TorchServe
+## Install TorchServe and torch-model-archiver
 
 1. Install dependencies
 
````
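The hunk above cuts off at the dependency step of the renamed install section. For orientation, a minimal sketch of the pip route, assuming the published PyPI package names:

```bash
# Both packages are on PyPI; this is the route the renamed anchor
# #install-torchserve-and-torch-model-archiver points to.
pip install torchserve torch-model-archiver
```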
````diff
@@ -90,7 +90,7 @@ For information about the model archiver, see [detailed documentation](model-arc
 
 ## Serve a model
 
-This section shows a simple example of serving a model with TorchServe. To complete this example, you must have already [installed TorchServe and the model archiver](#install-with-pip).
+This section shows a simple example of serving a model with TorchServe. To complete this example, you must have already [installed TorchServe and the model archiver](#install-torchserve-and-torch-model-archiver).
 
 To run this example, clone the TorchServe repository:
 
````
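The hunk ends just before the clone command itself; the same step appears verbatim in the docker/README.md diff further down:

```bash
git clone https://github.com/pytorch/serve.git
```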
````diff
@@ -156,7 +156,7 @@ pip install -U grpcio protobuf grpcio-tools
 python -m grpc_tools.protoc --proto_path=frontend/server/src/main/resources/proto/ --python_out=scripts --grpc_python_out=scripts frontend/server/src/main/resources/proto/inference.proto frontend/server/src/main/resources/proto/management.proto
 ```
 
-- Run inference using a sample client [gRPC python client](scripts/torchserve_grpc_client.py)
+- Run inference using a sample client [gRPC python client](ts_scripts/torchserve_grpc_client.py)
 
 ```bash
 python scripts/torchserve_grpc_client.py infer densenet161 examples/image_classifier/kitten.jpg
````

benchmarks/README.md (+10/-10)
````diff
@@ -12,7 +12,7 @@ We currently support benchmarking with JMeter & Apache Bench. One can also profi
 
 ## Installation
 
-It assumes that you have followed quick start/installation section and have required pre-requisites i.e. python3, java and docker [if needed]. If not then please refer [quick start](https://github.com/pytorch/serve/blob/master/README.md) for setup.
+It assumes that you have followed quick start/installation section and have required pre-requisites i.e. python3, java and docker [if needed]. If not then please refer [quick start](../README.md) for setup.
 
 ### Ubuntu
 
````
````diff
@@ -44,7 +44,7 @@ python3 windows_install_dependencies.py "C:\\Program Files"
 
 ## Models
 
-The pre-trained models for the benchmark can be mostly found in the [TorchServe model zoo](https://github.com/pytorch/serve/blob/master/docs/model_zoo.md). We currently support the following:
+The pre-trained models for the benchmark can be mostly found in the [TorchServe model zoo](../docs/model_zoo.md). We currently support the following:
 - [resnet: ResNet-18 (Default)](https://torchserve.pytorch.org/mar_files/resnet-18.mar)
 - [squeezenet: SqueezeNet V1.1](https://torchserve.pytorch.org/mar_files/squeezenet1_1.mar)
 
````
````diff
@@ -63,7 +63,7 @@ We also support compound benchmarks:
 
 #### Using pre-build docker image
 
-* You can specify, docker image using --docker option. You must create docker by following steps given [here](https://github.com/pytorch/serve/tree/master/docker).
+* You can specify, docker image using --docker option. You must create docker by following steps given [here](../docker/README.md).
 
 ```bash
 cd serve/benchmarks
````
````diff
@@ -81,7 +81,7 @@ NOTE - '--docker' and '--ts' are mutually exclusive options
 
 #### Using local TorchServe instance:
 
-* Install TorchServe using the [install guide](../README.md#install-torchserve)
+* Install TorchServe using the [install guide](../README.md#install-torchserve-and-torch-model-archiver)
 * Start TorchServe using following command :
 
 ```bash
````
````diff
@@ -166,13 +166,13 @@ Using ```https``` instead of ```http``` as the choice of protocol might not work
 The full list of options can be found by running with the -h or --help flags.
 
 ## Adding test plans
-Refer [adding a new jmeter](NewTestPlan.md) test plan for torchserve.
+Refer [adding a new jmeter](add_jmeter_test.md) test plan for torchserve.
 
 # Benchmarking with Apache Bench
 
 ## Installation
 
-It assumes that you have followed quick start/installation section and have required pre-requisites i.e. python3, java and docker [if needed]. If not then please refer [quick start](https://github.com/pytorch/serve/blob/master/README.md) for setup.
+It assumes that you have followed quick start/installation section and have required pre-requisites i.e. python3, java and docker [if needed]. If not then please refer [quick start](../README.md) for setup.
 
 ### pip dependencies
 
````
````diff
@@ -204,7 +204,7 @@ Refer [parameters section](#benchmark-parameters) for more details on configurab
 `python benchmark-ab.py`
 
 ### Run benchmark with a test plan
-The benchmark comes with pre-configured test plans which can be used directly to set parameters. Refer available [test plans](#test-plans ) for more details.
+The benchmark comes with pre-configured test plans which can be used directly to set parameters. Refer available [test plans](#test-plans) for more details.
 `python benchmark-ab.py <test plan>`
 
 ### Run benchmark with a customized test plan
````
````diff
@@ -238,7 +238,7 @@ This command will use all the configuration parameters given in config.json file
 ```
 ### Benchmark parameters
 The following parameters can be used to run the AB benchmark suite.
-- url: Input model URL. Default: "https://torchserve.pytorch.org/mar_files/squeezenet1_1.mar"
+- url: Input model URL. Default: `https://torchserve.pytorch.org/mar_files/squeezenet1_1.mar`
 - device: Execution device type. Default: cpu
 - exec_env: Execution environment. Default: docker
 - concurrency: Concurrency of requests. Default: 10
````
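The parameter names above presumably map to benchmark-ab.py CLI options, but the flag spellings are not shown in this diff; the invocation below is an assumption to illustrate the shape of a run:

```bash
# Hypothetical flags inferred from the parameter list above; confirm with
# `python benchmark-ab.py --help` before relying on them.
python benchmark-ab.py --url https://torchserve.pytorch.org/mar_files/squeezenet1_1.mar \
  --concurrency 10 --exec_env docker --device cpu
```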
````diff
@@ -275,7 +275,7 @@ The reports are generated at location "/tmp/benchmark/"
 ### Sample output CSV
 | Benchmark | Model | Concurrency | Requests | TS failed requests | TS throughput | TS latency P50 | TS latency P90| TS latency P90 | TS latency mean | TS error rate | Model_p50 | Model_p90 | Model_p99 |
 |---|---|---|---|---|---|---|---|---|---|---|---|---| ---|
-| AB | https://torchserve.pytorch.org/mar_files/squeezenet1_1.mar | 10 | 100 | 0 | 15.66 | 512 | 1191 | 2024 | 638.695 | 0 | 196.57 | 270.9 | 106.53|
+| AB | [squeezenet1_1](https://torchserve.pytorch.org/mar_files/squeezenet1_1.mar) | 10 | 100 | 0 | 15.66 | 512 | 1191 | 2024 | 638.695 | 0 | 196.57 | 270.9 | 106.53|
 
 ### Sample latency graph
 ![](predict_latency.png)
````
````diff
@@ -301,7 +301,7 @@ The benchmarks can also be used to analyze the backend performance using cProfil
 
 Using local TorchServe instance:
 
-* Install TorchServe using the [install guide](../README.md#install-torchserve)
+* Install TorchServe using the [install guide](../README.md#install-torchserve-and-torch-model-archiver)
 
 By using external docker container for TorchServe:
 
````
docker/README.md (+4/-4)
````diff
@@ -10,17 +10,17 @@
 * docker - Refer to the [official docker installation guide](https://docs.docker.com/install/)
 * git - Refer to the [official git set-up guide](https://help.github.com/en/github/getting-started-with-github/set-up-git)
 * For base Ubuntu with GPU, install following nvidia container toolkit and driver-
-  * [Nvidia container toolkit](https://github.com/NVIDIA/nvidia-docker#ubuntu-160418042004-debian-jessiestretchbuster)
+  * [Nvidia container toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian)
   * [Nvidia driver](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-nvidia-driver.html)
 
 * NOTE - Dockerfiles have not been tested on windows native platform.
 
 ## First things first
 
+If you have not cloned TorchServe source then:
 ```bash
-1. If you have not clone torchserve source then:
 git clone https://github.com/pytorch/serve.git
-2. cd serve/docker
+cd serve/docker
 ```
 
 # Create TorchServe docker image
````
````diff
@@ -199,7 +199,7 @@ curl http://localhost:8080/ping
 
 # Create torch-model-archiver from container
 
-To create mar [model archive] file for torchserve deployment, you can use following steps
+To create mar [model archive] file for TorchServe deployment, you can use following steps
 
 1. Start container by sharing your local model-store/any directory containing custom/example mar contents as well as model-store directory (if not there, create it)
 
````
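As a hedged illustration of the container workflow described in the hunk above (the image tag, mount path, and model file names are assumptions, not taken from this diff):

```bash
# Share a local model-store with the container; /home/model-server/model-store
# is the store path used by the official TorchServe images.
docker run --rm -it \
  -v $(pwd)/model-store:/home/model-server/model-store \
  pytorch/torchserve:latest bash
# Inside the container, package a model into a .mar file:
torch-model-archiver --model-name densenet161 --version 1.0 \
  --model-file model.py --serialized-file densenet161.pth \
  --handler image_classifier \
  --export-path /home/model-server/model-store
```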
docs/FAQs.md (+21/-21)
````diff
@@ -15,8 +15,8 @@ Torchserve API's are compliant with the [OpenAPI specification 3.0](https://swag
 
 ### How to use Torchserve in production?
 Depending on your use case, you will be able to deploy torchserve in production using following mechanisms.
-> Standalone deployment. Refer https://github.com/pytorch/serve/docker or https://github.com/pytorch/serve/docs/README.md
-> Cloud based deployment. Refer https://github.com/pytorch/serve/kubernetes https://github.com/pytorch/serve/cloudformation
+> Standalone deployment. Refer [TorchServe docker documentation](../docker/README.md) or [TorchServe documentation](../docs/README.md)
+> Cloud based deployment. Refer [TorchServe kubernetes documentation](../kubernetes/README.md) or [TorchServe cloudformation documentation](../cloudformation/README.md)
 
 
 ### What's difference between Torchserve and a python web app using web frameworks like Flask, Django?
````
````diff
@@ -38,36 +38,36 @@ Relevant documents.
 - [Torchserve configuration](https://github.com/pytorch/serve/blob/master/docs/configuration.md)
 - [Model zoo](https://github.com/pytorch/serve/blob/master/docs/model_zoo.md#model-zoo)
 - [Snapshot](https://github.com/pytorch/serve/blob/master/docs/snapshot.md)
-- [Docker]([https://github.com/pytorch/serve/blob/master/docker/README.md](https://github.com/pytorch/serve/blob/master/docker/README.md))
+- [Docker](../docker/README.md)
 
 ### Can I run Torchserve APIs on ports other than the default 8080 & 8081?
 Yes, Torchserve API ports are configurable using a properties file or environment variable.
-Refer [configuration.md](https://github.com/pytorch/serve/blob/master/docs/configuration.md) for more details.
+Refer [configuration.md](configuration.md) for more details.
 
 
 ### How can I resolve model specific python dependency?
 You can provide a requirements.txt while creating a mar file using "--requirements-file/ -r" flag. Also, you can add dependency files using "--extra-files" flag.
-Refer [configuration.md](https://github.com/pytorch/serve/blob/master/docs/configuration.md) for more details.
+Refer [configuration.md](configuration.md) for more details.
 
 ### Can I deploy Torchserve in Kubernetes?
 Yes, you can deploy Torchserve in Kubernetes using Helm charts.
-Refer [Kubernetes deployment ](https://github.com/pytorch/serve/blob/master/kubernetes/README.md) for more details.
+Refer [Kubernetes deployment ](../kubernetes/README.md) for more details.
 
 ### Can I deploy Torchserve with AWS ELB and AWS ASG?
 Yes, you can deploy Torchserve on a multinode ASG AWS EC2 cluster. There is a cloud formation template available [here](https://github.com/pytorch/serve/blob/master/cloudformation/ec2-asg.yaml) for this type of deployment. Refer [ Multi-node EC2 deployment behind Elastic LoadBalancer (ELB)](https://github.com/pytorch/serve/tree/master/cloudformation#multi-node-ec2-deployment-behind-elastic-loadbalancer-elb) more details.
 
 ### How can I backup and restore Torchserve state?
 TorchServe preserves server runtime configuration across sessions such that a TorchServe instance experiencing either a planned or unplanned service stop can restore its state upon restart. These saved runtime configuration files can be used for backup and restore.
-Refer [TorchServe model snapshot](https://github.com/pytorch/serve/blob/master/docs/snapshot.md#torchserve-model-snapshot) for more details.
+Refer [TorchServe model snapshot](snapshot.md#torchserve-model-snapshot) for more details.
 
 ### How can I build a Torchserve image from source?
-Torchserve has a utility [script]([https://github.com/pytorch/serve/blob/master/docker/build_image.sh](https://github.com/pytorch/serve/blob/master/docker/build_image.sh)) for creating docker images, the docker image can be hardware-based CPU or GPU compatible. A Torchserve docker image could be CUDA version specific as well.
+Torchserve has a utility [script](../docker/build_image.sh) for creating docker images, the docker image can be hardware-based CPU or GPU compatible. A Torchserve docker image could be CUDA version specific as well.
 
 All these docker images can be created using `build_image.sh` with appropriate options.
 
 Run `./build_image.sh --help` for all availble options.
 
-Refer [Create Torchserve docker image from source](../docker/README.md#create-torchserve-docker-image-from-source) for more details.
+Refer [Create Torchserve docker image from source](../docker/README.md#create-torchserve-docker-image) for more details.
 
 ### How to build a Torchserve image for a specific branch or commit id?
 To create a Docker image for a specific branch, use the following command:
````
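One FAQ in the hunk above cites the `--requirements-file/ -r` and `--extra-files` flags for bundling model-specific python dependencies into a mar file. A minimal sketch with illustrative file names:

```bash
# File names are illustrative; the flags are the ones quoted in the FAQ.
torch-model-archiver --model-name my_model --version 1.0 \
  --serialized-file my_model.pth \
  --handler my_handler.py \
  --requirements-file requirements.txt \
  --extra-files index_to_name.json
```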
````diff
@@ -84,50 +84,50 @@ The image created using Dockerfile.dev has Torchserve installed from source wher
 
 ## API
 Relevant documents
-- [Torchserve Rest API](https://github.com/pytorch/serve/blob/master/docs/model_zoo.md#model-zoo)
+- [Torchserve Rest API](../docs/model_zoo.md#model-zoo)
 
 ### What can I use other than *curl* to make requests to Torchserve?
 You can use any tool like Postman, Insomnia or even use a python script to do so. Find sample python script [here](https://github.com/pytorch/serve/blob/master/docs/default_handlers.md#torchserve-default-inference-handlers).
 
 ### How can I add a custom API to an existing framework?
 You can add a custom API using **plugins SDK** available in Torchserve.
-Refer to [serving sdk](https://github.com/pytorch/serve/blob/master/serving-sdk) and [plugins](https://github.com/pytorch/serve/blob/master/plugins) for more details.
+Refer to [serving sdk](../serving-sdk) and [plugins](../plugins) for more details.
 
 ### How can pass multiple images in Inference request call to my model?
 You can provide multiple data in a single inference request to your custom handler as a key-value pair in the `data` object.
 Refer [this](https://github.com/pytorch/serve/issues/529#issuecomment-658012913) for more details.
 
 ## Handler
 Relevant documents
-- [Default handlers](https://github.com/pytorch/serve/blob/master/docs/model_zoo.md#model-zoo)
-- [Custom Handlers](https://github.com/pytorch/serve/blob/master/docs/custom_service.md#custom-handlers)
+- [Default handlers](default_handlers.md#torchserve-default-inference-handlers)
+- [Custom Handlers](custom_service.md#custom-handlers)
 
 ### How do I return an image output for a model?
 You would have to write a custom handler with the post processing to return image.
-Refer [custom service documentation](https://github.com/pytorch/serve/blob/master/docs/custom_service.md#custom-handlers) for more details.
+Refer [custom service documentation](custom_service.md#custom-handlers) for more details.
 
 ### How to enhance the default handlers?
 Write a custom handler that extends the default handler and just override the methods to be tuned.
-Refer [custom service documentation](https://github.com/pytorch/serve/blob/master/docs/custom_service.md#custom-handlers) for more details.
+Refer [custom service documentation](custom_service.md#custom-handlers) for more details.
 
 ### Do I always have to write a custom handler or are there default ones that I can use?
 Yes, you can deploy your model with no-code/ zero code by using builtin default handlers.
-Refer [default handlers](https://github.com/pytorch/serve/blob/master/docs/default_handlers.md#torchserve-default-inference-handlers) for more details.
+Refer [default handlers](default_handlers.md#torchserve-default-inference-handlers) for more details.
 
 ### Is it possible to deploy Hugging Face models?
 Yes, you can deploy Hugging Face models using a custom handler.
-Refer [Huggingface_Transformers](https://github.com/pytorch/serve/blob/master/examples/Huggingface_Transformers/README.md) for example.
+Refer [Huggingface_Transformers](../examples/Huggingface_Transformers/README.md) for example.
 
 ## Model-archiver
 Relevant documents
-- [Model-archiver ](https://github.com/pytorch/serve/tree/master/model-archiver#torch-model-archiver-for-torchserve)
-- [Docker Readme](https://github.com/pytorch/serve/blob/master/docker/README.md)
+- [Model-archiver ](../model-archiver/README.md#torch-model-archiver-for-torchserve)
+- [Docker Readme](../docker/README.md)
 
 ### What is a mar file?
 A mar file is a zip file consisting of all model artifacts with the ".mar" extension. The cmd-line utility *torch-model-archiver* is used to create a mar file.
 
 ### How can create mar file using Torchserve docker container?
-Yes, you create your mar file using a Torchserve container. Follow the steps given [here](https://github.com/pytorch/serve/blob/master/docker/README.md#create-torch-model-archiver-from-container).
+Yes, you create your mar file using a Torchserve container. Follow the steps given [here](../docker/README.md#create-torch-model-archiver-from-container).
 
 ### Can I add multiple serialized files in single mar file?
 Currently `TorchModelArchiver` allows supplying only one serialized file with `--serialized-file` parameter while creating the mar. However, you can supply any number and any type of file with `--extra-files` flag. All the files supplied in the mar file are available in `model_dir` location which can be accessed through the context object supplied to the handler's entry point.
````
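Several FAQs in the hunk above point to the built-in default handlers for no-code deployment. A hedged end-to-end sketch (file names are illustrative; image_classifier is one of the built-in handler names):

```bash
# Package with a built-in handler instead of custom code, then serve it.
torch-model-archiver --model-name resnet18 --version 1.0 \
  --serialized-file resnet18.pth --handler image_classifier
mkdir -p model_store && mv resnet18.mar model_store/
torchserve --start --model-store model_store --models resnet18=resnet18.mar
```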
````diff
@@ -137,7 +137,7 @@ Sample code snippet:
 properties = context.system_properties
 model_dir = properties.get("model_dir")
 ```
-Refer [Torch model archiver cli](https://github.com/pytorch/serve/blob/master/model-archiver/README.md#torch-model-archiver-command-line-interface) for more details.
+Refer [Torch model archiver cli](../model-archiver/README.md#torch-model-archiver-command-line-interface) for more details.
 Relevant issues: [[#633](https://github.com/pytorch/serve/issues/633)]
 
 ### Can I download and register model using s3 presigned v4 url?
````
