
Commit 67f92ce: Weed Classification

1 parent 60444bc

11 files changed, +239 −0 lines

Weed Classification/Dataset/README.md

+119
# Weed Classification using DL

## PROJECT TITLE

Weed Detection using Deep Learning

## GOAL

To identify the weed species in a given image.
## DATASET

The link for the dataset used in this project: https://www.kaggle.com/datasets/imsparsh/deepweeds

The dataset contains 9 classes of weeds.

## EDA

![EDA](../Images/EDA1.png)
![Dataset Sample](../Images/Input.png)
## DESCRIPTION

This project aims to identify the weed species in an image using Deep Learning.

## WHAT I HAD DONE

1. Data collection: loaded the dataset linked above using TensorFlow Datasets.
2. Data preprocessing: preprocessed the images to match each model's input requirements.
3. Model selection: DenseNet and MobileNet V2, each with an added Dense classification head.
4. Comparative analysis: compared the accuracy scores of both models.
## MODELS SUMMARY

DenseNet (Model: "model"):

```
 Layer (type)                     Output Shape           Param #   Connected to
==================================================================================================
 input_1 (InputLayer)             [(None, 224, 224, 3)]  0         []
 zero_padding2d (ZeroPadding2D)   (None, 230, 230, 3)    0         ['input_1[0][0]']
 conv1/conv (Conv2D)              (None, 112, 112, 64)   9408      ['zero_padding2d[0][0]']
 conv1/bn (BatchNormalization)    (None, 112, 112, 64)   256       ['conv1/conv[0][0]']
 conv1/relu (Activation)          (None, 112, 112, 64)   0         ['conv1/bn[0][0]']
 zero_padding2d_1 (ZeroPadding2D) (None, 114, 114, 64)   0         ['conv1/relu[0][0]']
 pool1 (MaxPooling2D)             (None, 56, 56, 64)     0         ['zero_padding2d_1[0][0]']
 conv2_block1_0_bn (BatchNormalization) (None, 56, 56, 64) 256     ['pool1[0][0]']
 ...
==================================================================================================
Total params: 7,333,961
Trainable params: 380,105
Non-trainable params: 6,953,856
```

MobileNet (Model: "sequential_1"):

```
 Layer (type)                                       Output Shape        Param #
=================================================================
 mobilenetv2_1.00_224 (Functional)                  (None, 8, 8, 1280)  2257984
 global_average_pooling2d (GlobalAveragePooling2D)  (None, 1280)        0
 dense_3 (Dense)                                    (None, 256)         327936
 dropout_1 (Dropout)                                (None, 256)         0
 dense_4 (Dense)                                    (None, 9)           2313
=================================================================
Total params: 2,588,233
Trainable params: 330,249
Non-trainable params: 2,257,984
```
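The MobileNet summary above corresponds to a frozen MobileNetV2 backbone with a small classification head. A minimal sketch (the function name `build_mobilenet_classifier` and the dropout rate are illustrative; the 256x256 input matches the `(None, 8, 8, 1280)` backbone output shown above):

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers


def build_mobilenet_classifier(num_classes=9, input_shape=(256, 256, 3),
                               weights="imagenet"):
    # Pretrained MobileNetV2 backbone without its top classifier.
    base = keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights=weights)
    base.trainable = False  # freeze: only the new head below is trained

    model = keras.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.2),  # rate is a placeholder; not stated in the README
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Freezing the backbone is what leaves only the head's 330,249 parameters (327,936 + 2,313) trainable out of 2,588,233 total, matching the summary.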
## LIBRARIES NEEDED

The following libraries are required to run this project:

- matplotlib
- tensorflow
- keras
- PIL (Pillow)
## EVALUATION METRICS

The evaluation metrics I used to assess the models:

- Accuracy
- Loss
- Confusion Matrix

The confusion matrices are shown in the Images folder.
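A confusion matrix like the ones in the Images folder can be produced from the model's per-class probabilities. A minimal sketch using only the libraries listed above (function names are illustrative):

```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt


def confusion_from_predictions(y_true, y_pred_probs, num_classes=9):
    # Collapse per-class probabilities to hard predictions, then tally counts.
    y_pred = np.argmax(y_pred_probs, axis=-1)
    return tf.math.confusion_matrix(y_true, y_pred,
                                    num_classes=num_classes).numpy()


def plot_confusion(cm, path="confusion_matrix.png"):
    # Simple heatmap: rows are true classes, columns are predicted classes.
    fig, ax = plt.subplots()
    ax.imshow(cm, cmap="Blues")
    ax.set_xlabel("Predicted class")
    ax.set_ylabel("True class")
    fig.savefig(path)
    plt.close(fig)
```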
## RESULTS

Results on the validation dataset:

For MobileNet V2:
- Accuracy: 83%
- Loss: 0.47

For DenseNet:
- Accuracy: 70%
- Loss: 0.82
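The validation numbers above come from Keras's standard fit/evaluate loop. A minimal sketch (the helper name and epoch count are illustrative, not taken from the notebook):

```python
import tensorflow as tf


def train_and_evaluate(model, train_ds, val_ds, epochs=10):
    # Train on the training split, then report loss/accuracy on validation.
    history = model.fit(train_ds, validation_data=val_ds,
                        epochs=epochs, verbose=0)
    val_loss, val_acc = model.evaluate(val_ds, verbose=0)
    return history, val_loss, val_acc
```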
## CONCLUSION

Based on the results we can draw the following conclusion:

1. On the validation set, the MobileNet V2 model performed better than the DenseNet model (83% vs. 70% accuracy, 0.47 vs. 0.82 loss).

Weed Classification/Images/EDA1.png


Weed Classification/Images/Input.png


Weed Classification/Model/weed-classification.ipynb

+1
Large diffs are not rendered by default.

Weed Classification/README.md

+119
