Commit 90b0b77 ("docs")
Parent: 6156332

File tree: 9 files changed, +116 -15 lines changed

+38

@@ -0,0 +1,38 @@
+---
+name: ❓ Questions/Help
+about: If you have questions, please first search existing issues and docs
+labels: 'question, needs triage'
+---
+
+Notice: In order to resolve issues more efficiently, please raise your issue following the template.
+(注意:为了更加高效率解决您遇到的问题,请按照模板提问,补充细节)
+
+## ❓ Questions and Help
+
+### Before asking:
+
+1. Search the issues.
+2. Search the docs.
+
+<!-- If you still can't find what you need: -->
+
+#### What is your question?
+
+#### Code
+
+<!-- Please paste a code snippet if your question requires it! -->
+
+#### What have you tried?
+
+#### What's your environment?
+
+- OS (e.g., Linux):
+- FunASR Version (e.g., 1.0.0):
+- ModelScope Version (e.g., 1.11.0):
+- PyTorch Version (e.g., 2.0.0):
+- How you installed funasr (`pip`, source):
+- Python version:
+- GPU (e.g., V100M32):
+- CUDA/cuDNN version (e.g., cuda11.7):
+- Docker version (e.g., funasr-runtime-sdk-cpu-0.4.1):
+- Any other relevant information:

.github/ISSUE_TEMPLATE/bug_report.md

+47

@@ -0,0 +1,47 @@
+---
+name: 🐛 Bug Report
+about: Submit a bug report to help us improve
+labels: 'bug, needs triage'
+---
+
+Notice: In order to resolve issues more efficiently, please raise your issue following the template.
+(注意:为了更加高效率解决您遇到的问题,请按照模板提问,补充细节)
+
+## 🐛 Bug
+
+<!-- A clear and concise description of what the bug is. -->
+
+### To Reproduce
+
+Steps to reproduce the behavior (**always include the command you ran**):
+
+1. Run cmd '....'
+2. See error
+
+<!-- If you have a code sample, error messages, or stack traces, please provide them here as well. -->
+
+#### Code sample
+<!-- Ideally attach a minimal code sample to reproduce the described issue.
+Minimal means having the shortest code that still preserves the bug. -->
+
+### Expected behavior
+
+<!-- A clear and concise description of what you expected to happen. -->
+
+### Environment
+
+- OS (e.g., Linux):
+- FunASR Version (e.g., 1.0.0):
+- ModelScope Version (e.g., 1.11.0):
+- PyTorch Version (e.g., 2.0.0):
+- How you installed funasr (`pip`, source):
+- Python version:
+- GPU (e.g., V100M32):
+- CUDA/cuDNN version (e.g., cuda11.7):
+- Docker version (e.g., funasr-runtime-sdk-cpu-0.4.1):
+- Any other relevant information:
+
+### Additional context
+
+<!-- Add any other context about the problem here. -->

.github/ISSUE_TEMPLATE/config.yaml

+1

@@ -0,0 +1 @@
+blank_issues_enabled: false

.github/ISSUE_TEMPLATE/error_docs.md

+15

@@ -0,0 +1,15 @@
+---
+name: 📚 Documentation/Typos
+about: Report an issue related to documentation or a typo
+labels: 'documentation, needs triage'
+---
+
+## 📚 Documentation
+
+For typos and doc fixes, please go ahead and:
+
+1. Create an issue.
+2. Fix the typo.
+3. Submit a PR.
+
+Thanks!

README.md

+3 -3

@@ -121,7 +121,7 @@ model = AutoModel(
 res = model.generate(
     input=f"{model.model_path}/example/en.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size_s=60,
     merge_vad=True, #
@@ -150,7 +150,7 @@ model = AutoModel(model=model_dir, trust_remote_code=True, device="cuda:0")
 res = model.generate(
     input=f"{model.model_path}/example/en.mp3",
     cache={},
-    language="zh", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="zh", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=False,
     batch_size=64,
 )
@@ -172,7 +172,7 @@ m, kwargs = SenseVoiceSmall.from_pretrained(model=model_dir, device="cuda:0")

 res = m.inference(
     data_in=f"{kwargs['model_path']}/example/en.mp3",
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=False,
     **kwargs,
 )
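
These README hunks only touch the comment on the `language` argument, replacing the non-existent code `"zn"` with `"zh"` (Mandarin). For context, the snippet below is a minimal sketch of the AutoModel call the hunks sit inside, with the corrected codes; the model ID and the VAD-related constructor argument are assumptions drawn from the surrounding FunASR/SenseVoice documentation rather than from this diff, while `generate()` and its arguments mirror the context lines above.

    # Sketch only: "iic/SenseVoiceSmall" and the VAD setting are assumed defaults,
    # not part of this commit; the generate() arguments mirror the diff context.
    from funasr import AutoModel

    model = AutoModel(
        model="iic/SenseVoiceSmall",  # assumed SenseVoice model ID on ModelScope
        vad_model="fsmn-vad",         # assumed VAD front-end from the README example
        trust_remote_code=True,
        device="cuda:0",
    )

    res = model.generate(
        input=f"{model.model_path}/example/en.mp3",
        cache={},
        language="auto",  # valid codes: "zh", "en", "yue", "ja", "ko", "nospeech"
        use_itn=True,
        batch_size_s=60,
        merge_vad=True,
    )
    print(res[0]["text"])  # generate() returns a list of result dicts

`language="auto"` lets the model detect the spoken language; passing one of the listed codes pins it, and `"zh"` (not `"zn"`) is the code for Mandarin.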

README_ja.md

+3 -3

@@ -121,7 +121,7 @@ model = AutoModel(
 res = model.generate(
     input=f"{model.model_path}/example/en.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size_s=60,
     merge_vad=True, #
@@ -150,7 +150,7 @@ model = AutoModel(model=model_dir, trust_remote_code=True, device="cuda:0")
 res = model.generate(
     input=f"{model.model_path}/example/en.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size=64,
 )
@@ -172,7 +172,7 @@ m, kwargs = SenseVoiceSmall.from_pretrained(model=model_dir, device="cuda:0")

 res = m.inference(
     data_in=f"{kwargs['model_path']}/example/en.mp3",
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=False,
     **kwargs,
 )

README_zh.md

+3 -3

@@ -125,7 +125,7 @@ model = AutoModel(
 res = model.generate(
     input=f"{model.model_path}/example/en.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size_s=60,
     merge_vad=True, #
@@ -154,7 +154,7 @@ model = AutoModel(model=model_dir, trust_remote_code=True, device="cuda:0")
 res = model.generate(
     input=f"{model.model_path}/example/en.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size=64,
 )
@@ -176,7 +176,7 @@ m, kwargs = SenseVoiceSmall.from_pretrained(model=model_dir, device="cuda:0")

 res = m.inference(
     data_in=f"{kwargs['model_path']}/example/en.mp3",
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=False,
     **kwargs,
 )

demo1.py

+5 -5

@@ -22,7 +22,7 @@
 res = model.generate(
     input=f"{model.model_path}/example/en.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size_s=60,
     merge_vad=True, #
@@ -35,7 +35,7 @@
 res = model.generate(
     input=f"{model.model_path}/example/zh.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size_s=60,
     merge_vad=True, #
@@ -48,7 +48,7 @@
 res = model.generate(
     input=f"{model.model_path}/example/yue.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size_s=60,
     merge_vad=True, #
@@ -61,7 +61,7 @@
 res = model.generate(
     input=f"{model.model_path}/example/ja.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size_s=60,
     merge_vad=True, #
@@ -75,7 +75,7 @@
 res = model.generate(
     input=f"{model.model_path}/example/ko.mp3",
     cache={},
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=True,
     batch_size_s=60,
     merge_vad=True, #

demo2.py

+1 -1

@@ -13,7 +13,7 @@

 res = m.inference(
     data_in=f"{kwargs['model_path']}/example/en.mp3",
-    language="auto", # "zn", "en", "yue", "ja", "ko", "nospeech"
+    language="auto", # "zh", "en", "yue", "ja", "ko", "nospeech"
     use_itn=False,
     **kwargs,
 )
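
demo2.py exercises the lower-level path shown in this hunk header: loading SenseVoiceSmall directly via `from_pretrained` and calling `m.inference()`. Below is a minimal sketch of that flow, assuming `SenseVoiceSmall` is importable from the repository's `model.py` and that `"iic/SenseVoiceSmall"` is the ModelScope model ID; the `from_pretrained` and `inference` arguments mirror the diff's context lines.

    # Sketch of the direct-inference path; import location and model ID are assumptions.
    from model import SenseVoiceSmall  # assumed: class defined in model.py at the repo root

    model_dir = "iic/SenseVoiceSmall"  # assumed ModelScope model ID
    m, kwargs = SenseVoiceSmall.from_pretrained(model=model_dir, device="cuda:0")
    m.eval()  # the model is a torch.nn.Module; disable dropout for inference

    res = m.inference(
        data_in=f"{kwargs['model_path']}/example/en.mp3",
        language="auto",  # "zh", "en", "yue", "ja", "ko", "nospeech"
        use_itn=False,
        **kwargs,
    )
    print(res)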
