
Commit 40ccad1

Author: Diana Gamez
Commit message: New convolutional neural network model using dropout and batch normalization techniques
Parent: 89bd765

File tree

7 files changed: +194 -20 lines

.gitignore (+1)

@@ -13,3 +13,4 @@ venv/
 output/
 emopy_venv/
 EmoPy/examples/image_data/image.jpg
+EmoPy/examples/image_data/fer2013.csv

EmoPy.egg-info/PKG-INFO (+46 -16)

@@ -1,24 +1,24 @@
 Metadata-Version: 2.1
 Name: EmoPy
-Version: 0.0.4
+Version: 0.0.5
 Summary: A deep neural net toolkit for emotion analysis via Facial Expression Recognition (FER)
 Home-page: https://github.com/thoughtworksarts/EmoPy
 Author: ThoughtWorks Arts
 Author-email: [email protected]
 License: UNKNOWN
 Description: # EmoPy
-EmoPy is a python toolkit with deep neural net classes which accurately predict emotions given images of people's faces.
+EmoPy is a python toolkit with deep neural net classes which aims to make accurate predictions of emotions given images of people's faces.
 
 ![Labeled FER Images](readme_docs/labeled_images.png "Labeled Facial Expression Images")
 *Figure from [@Chen2014FacialER]*
 
-The aim of this project is to make accurate [Facial Expression Recognition (FER)](https://en.wikipedia.org/wiki/Emotion_recognition) models free, open, easy to use, and easy to integrate into different projects.
+The goal of this project is to explore the field of [Facial Expression Recognition (FER)](https://en.wikipedia.org/wiki/Emotion_recognition) using existing public datasets, and make neural network models which are free, open, easy to research, and easy to integrate into different projects. The behavior of the system is highly dependent on the available data, and the developers of EmoPy created and tested the system using only publicly-available datasets.
 
-The developers of EmoPy have written two guides you may find useful:
+To get a better grounding in the project you may find these write-ups useful:
 * [Recognizing human facial expressions with machine learning](https://www.thoughtworks.com/insights/blog/recognizing-human-facial-expressions-machine-learning)
 * [EmoPy: a machine learning toolkit for emotional expression](https://www.thoughtworks.com/insights/blog/emopy-machine-learning-toolkit-emotional-expression)
 
-We aim to expand our development community, and we are open to suggestions and contributions. Please [contact us](mailto:[email protected]) to discuss.
+We aim to expand our development community, and we are open to suggestions and contributions. Usually these types of algorithms are used commercially, so we want to help open source the best possible version of them in order to improve public access and engagement in this area. Please [contact us](mailto:[email protected]) to discuss.
 
 ## Overview
 
@@ -32,27 +32,38 @@ Description: # EmoPy
 - `directory_data_loader.py`
 - `data_generator.py`
 
-The `fermodel.py` module uses pretrained models for FER prediction, making it the easiest entry point to get a trained model up and running quickly.
+The `fermodel.py` module uses pre-trained models for FER prediction, making it the easiest entry point to get a trained model up and running quickly.
 
 Each of the modules contains one class, except for `neuralnets.py`, which has one interface and four subclasses. Each of these subclasses implements a different neural net architecture using the Keras framework with Tensorflow backend, allowing you to experiment and see which one performs best for your needs.
 
 The [EmoPy documentation](https://emopy.readthedocs.io/) contains detailed information on the classes and their interactions. Also, an overview of the different neural nets included in this project is included below.
 
-## Datasets
+## Operating Constraints
 
-Try out the system using your own dataset or a small dataset we have provided in the [examples/image_data](examples/image_data) subdirectory. The sample datasets we provide will not yield good results due to their small size, but they serve as a great way to get started.
+Commercial FER projects are regularly trained on millions of labeled images, in massive private datasets. By contrast, in order to remain free and open source, EmoPy was created to work with only public datasets, which presents a major constraint on training for accurate results.
 
-Predictions ideally perform well on a diversity of datasets, illumination conditions, and subsets of the standard 7 emotion labels (happiness, anger, fear, surprise, disgust, sadness, calm/neutral) seen in FER research. Some good example public datasets are the [Extended Cohn-Kanade](http://www.consortium.ri.cmu.edu/ckagree/) and [FER+](https://github.com/Microsoft/FERPlus).
+EmoPy was originally created and designed to fulfill the needs of the [RIOT project](https://thoughtworksarts.io/projects/riot/), in which audience members' facial expressions are recorded in a controlled lighting environment.
 
-## Environment Setup
+For these two reasons, EmoPy functions best when the input image:
 
-EmoPy runs using Python 3.6, theoretically on any Python-compatible OS. We tested EmoPy using Python 3.6.6 on OSX. You can install [Python 3.6.6](https://www.python.org/downloads/release/python-366/) from the Python website.
+* is evenly lit, with relatively few shadows, and/or
+* matches to some extent the style, framing and cropping of images from the training dataset
 
-Please note that this is not the most current version of Python, but the TensorFlow package doesn't work with Python 3.7 yet, so EmoPy cannot run with Python 3.7.
+As of this writing, the best available public dataset we have found is [Microsoft FER+](https://github.com/Microsoft/FERPlus), with around 30,000 images. Training on this dataset should yield best results when the input image relates to some extent to the style of the images in the set.
 
-Python is compatible with multiple operating systems. If you would like to use EmoPy on another OS, please convert these instructions to match your target environment. Let us know how you get on, and we will try to support you and share your results.
+For a deeper analysis of the origin and operation of EmoPy, which will be useful to help evaluate its potential for your needs, please read our [full write-up on EmoPy](https://www.thoughtworks.com/insights/blog/emopy-machine-learning-toolkit-emotional-expression).
+
+## Choosing a Dataset
+
+Try out the system using your own dataset or a small dataset we have provided in the [Emopy/examples/image_data](Emopy/examples/image_data) subdirectory. The sample datasets we provide will not yield good results due to their small size, but they serve as a great way to get started.
+
+Predictions ideally perform well on a diversity of datasets, illumination conditions, and subsets of the standard 7 emotion labels (happiness, anger, fear, surprise, disgust, sadness, calm/neutral) seen in FER research. Some good example public datasets are the [Extended Cohn-Kanade](http://www.consortium.ri.cmu.edu/ckagree/) and [Microsoft FER+](https://github.com/Microsoft/FERPlus).
+
+## Environment Setup
 
+EmoPy runs using Python 3.6 and up, theoretically on any Python-compatible OS. We tested EmoPy using Python 3.6.6 on OSX. You can install [Python 3.6.6](https://www.python.org/downloads/release/python-366/) from the Python website.
 
+Python is compatible with multiple operating systems. If you would like to use EmoPy on another OS, please convert these instructions to match your target environment. Let us know how you get on, and we will try to support you and share your results.
 
 If you do not have Homebrew installed run this command to install:
 
@@ -75,7 +86,7 @@ Description: # EmoPy
 ```
 python3.6 -m venv venv
 ```
-where the second `venv` is the name of your virtual environment. To activate, run from the same directory:
+where the second `venv` is the name of your virtual environment. To activate, run from the same directory:
 ```
 source venv/bin/activate
 ```
@@ -112,15 +123,15 @@ Description: # EmoPy
 
 ## Running the examples
 
-You can find example code to run each of the current neural net classes in [examples](examples). You may either download the example directory to a location of your choice on your machine, or find the example directory included in the installation.
+You can find example code to run each of the current neural net classes in [examples](EmoPy/examples). You may either download the example directory to a location of your choice on your machine, or find the example directory included in the installation.
 
 If you choose to use the installed package, you can find the examples directory by starting in the virtual environment directory you created and typing:
 ```
 cd lib/python3.6/site-packages/EmoPy/examples
 ```
 
 
-The best place to start is the [FERModel example](examples/fermodel_example.py). Here is a listing of that code:
+The best place to start is the [FERModel example](EmoPy/examples/fermodel_example.py). Here is a listing of that code:
 
 ```python
 from EmoPy.src.fermodel import FERModel
@@ -233,6 +244,25 @@ Description: # EmoPy
 
 [@vanGent2016]: http://www.paulvangent.com/2016/04/01/emotion-recognition-with-python-opencv-and-a-face-dataset/ "Emotion Recognition With Python, OpenCV and a Face Dataset. A tech blog about fun things with Python and embedded electronics."
 
+## Contributors
+
+Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):
+
+<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
+<!-- prettier-ignore -->
+| [<img src="https://avatars1.githubusercontent.com/u/11094785?v=4" width="100px;" alt="angelicaperez37"/><br /><sub><b>angelicaperez37</b></sub>](https://github.com/angelicaperez37)<br />[💻](https://github.com/thoughtworksarts/EmoPy/commits?author=angelicaperez37 "Code") [📝](#blog-angelicaperez37 "Blogposts") [📖](https://github.com/thoughtworksarts/EmoPy/commits?author=angelicaperez37 "Documentation") | [<img src="https://avatars0.githubusercontent.com/u/19356750?v=4" width="100px;" alt="sbriley"/><br /><sub><b>sbriley</b></sub>](https://github.com/sbriley)<br />[💻](https://github.com/thoughtworksarts/EmoPy/commits?author=sbriley "Code") | [<img src="https://avatars0.githubusercontent.com/u/1446811?v=4" width="100px;" alt="Sofia Tania"/><br /><sub><b>Sofia Tania</b></sub>](http://tania.pw)<br />[💻](https://github.com/thoughtworksarts/EmoPy/commits?author=stania1 "Code") | [<img src="https://avatars2.githubusercontent.com/u/756527?v=4" width="100px;" alt="Andrew McWilliams"/><br /><sub><b>Andrew McWilliams</b></sub>](https://jahya.net)<br />[📖](https://github.com/thoughtworksarts/EmoPy/commits?author=microcosm "Documentation") [🤔](#ideas-microcosm "Ideas, Planning, & Feedback") | [<img src="https://avatars2.githubusercontent.com/u/10860893?v=4" width="100px;" alt="Webs"/><br /><sub><b>Webs</b></sub>](http://www.websonthewebs.com)<br />[💻](https://github.com/thoughtworksarts/EmoPy/commits?author=weberswords "Code") | [<img src="https://avatars0.githubusercontent.com/u/14201413?v=4" width="100px;" alt="Sara GW"/><br /><sub><b>Sara GW</b></sub>](https://github.com/saragw6)<br />[💻](https://github.com/thoughtworksarts/EmoPy/commits?author=saragw6 "Code") | [<img src="https://avatars1.githubusercontent.com/u/3609989?v=4" width="100px;" alt="Megan Sullivan"/><br /><sub><b>Megan Sullivan</b></sub>](http://www.linkedin.com/in/meganesu)<br />[📖](https://github.com/thoughtworksarts/EmoPy/commits?author=meganesu "Documentation") |
+| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
+| [<img src="https://avatars0.githubusercontent.com/u/33844894?v=4" width="100px;" alt="sadnantw"/><br /><sub><b>sadnantw</b></sub>](https://github.com/sadnantw)<br />[💻](https://github.com/thoughtworksarts/EmoPy/commits?author=sadnantw "Code") [⚠️](https://github.com/thoughtworksarts/EmoPy/commits?author=sadnantw "Tests") | [<img src="https://avatars3.githubusercontent.com/u/192539?v=4" width="100px;" alt="Julien Deswaef"/><br /><sub><b>Julien Deswaef</b></sub>](http://xuv.be)<br />[💻](https://github.com/thoughtworksarts/EmoPy/commits?author=xuv "Code") [📖](https://github.com/thoughtworksarts/EmoPy/commits?author=xuv "Documentation") | [<img src="https://avatars0.githubusercontent.com/u/10910251?v=4" width="100px;" alt="Tanushri Chakravorty"/><br /><sub><b>Tanushri Chakravorty</b></sub>](https://github.com/sinbycos)<br />[💻](https://github.com/thoughtworksarts/EmoPy/commits?author=sinbycos "Code") [💡](#example-sinbycos "Examples") | [<img src="https://avatars2.githubusercontent.com/u/94368?v=4" width="100px;" alt="Linas Vepštas"/><br /><sub><b>Linas Vepštas</b></sub>](http://linas.org)<br />[🔌](#plugin-linas "Plugin/utility libraries") |
+<!-- ALL-CONTRIBUTORS-LIST:END -->
+
+This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!
+
+## Projects built on EmoPy
+- [RIOT AI](http://karenpalmer.uk/portfolio/riot/)
+- [ROS wrapper for EmoPy](https://github.com/hansonrobotics/ros_emopy)
+
+Want to list your project here? Please file an [issue](issues/new) (or pull request) and tell us how EmoPy is helping you.
+
 Platform: UNKNOWN
 Classifier: Programming Language :: Python :: 3.6
 Classifier: Operating System :: MacOS :: MacOS X
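Note: the hunks above show the README's FERModel listing only up to its first import line. For orientation, here is a minimal sketch of how the `fermodel.py` entry point described in the README is typically driven; the exact emotion subset and sample image path are illustrative assumptions, not taken from this commit.

```python
# Minimal sketch (not the full fermodel_example.py listing): load a
# pre-trained FERModel for a chosen subset of emotions and run a
# prediction on one image shipped with the examples package.
# The emotion names and image filename below are assumptions.
from EmoPy.src.fermodel import FERModel
from pkg_resources import resource_filename

target_emotions = ['calm', 'anger', 'happiness']  # assumed subset
model = FERModel(target_emotions, verbose=True)

model.predict(resource_filename('EmoPy.examples',
                                'image_data/sample_happy_image.png'))
```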

EmoPy.egg-info/SOURCES.txt (+1 -1)

@@ -1,4 +1,3 @@
-.DS_Store
 .gitignore
 LICENSE
 MANIFEST.in
@@ -12,6 +11,7 @@ EmoPy.egg-info/dependency_links.txt
 EmoPy.egg-info/requires.txt
 EmoPy.egg-info/top_level.txt
 EmoPy/examples/__init__.py
+EmoPy/examples/convolutional_dropout_model.py
 EmoPy/examples/convolutional_lstm_model.py
 EmoPy/examples/convolutional_model.py
 EmoPy/examples/fermodel_example.py

EmoPy.egg-info/requires.txt (+1 -1)

@@ -6,7 +6,7 @@ numpy<=1.14.5,>=1.13.3
 scikit-image>=0.13.1
 scikit-learn>=0.19.1
 scikit-neuralnetwork>=0.7
-scipy>=0.19.1
+scipy==1.0.0
 tensorflow>=1.10.1
 opencv-python
 h5py
EmoPy/examples/convolutional_dropout_model.py (new file, +48)

@@ -0,0 +1,48 @@
+from EmoPy.src.fermodel import FERModel
+from EmoPy.src.directory_data_loader import DirectoryDataLoader
+from EmoPy.src.csv_data_loader import CSVDataLoader
+from EmoPy.src.data_generator import DataGenerator
+from EmoPy.src.neuralnets import ConvolutionalNNDropout
+from sklearn.model_selection import train_test_split
+import numpy as np
+
+from pkg_resources import resource_filename,resource_exists
+
+validation_split = 0.15
+
+target_dimensions = (48, 48)
+channels = 1
+verbose = True
+
+#fer_dataset_label_map = {'0': 'anger', '1' : 'disgust', '2': 'fear', '3' : 'happiness', '4' : 'sadness', '5' : 'surprise', '6' : 'calm'}
+fer_dataset_label_map = {'0': 'anger', '1' : 'disgust', '2': 'fear', '3' : 'happiness'}
+
+print('--------------- Convolutional Dropout Model -------------------')
+print('Loading data...')
+csv_file_path = resource_filename('EmoPy.examples','image_data/fer2013.csv')
+data_loader = CSVDataLoader(target_emotion_map=fer_dataset_label_map, datapath=csv_file_path, validation_split=validation_split,
+                            image_dimensions=target_dimensions, csv_label_col=0, csv_image_col=1, out_channels=1)
+dataset = data_loader.load_data()
+
+
+if verbose:
+    dataset.print_data_details()
+
+print('Preparing training/testing data...')
+train_images, train_labels = dataset.get_training_data()
+train_gen = DataGenerator().fit(train_images, train_labels)
+test_images, test_labels = dataset.get_test_data()
+test_gen = DataGenerator().fit(test_images, test_labels)
+
+X_train, X_valid, y_train, y_valid = train_test_split(train_images, train_labels, test_size=0.1, random_state=41)
+
+print('Training net...')
+model = ConvolutionalNNDropout(target_dimensions, channels, dataset.get_emotion_index_map(), verbose=True)
+model.fit_generator(train_gen.generate(target_dimensions, batch_size=5),
+                    test_gen.generate(target_dimensions, batch_size=5),
+                    epochs=15)
+#model.fit_generator(train_gen.generate(target_dimensions, batch_size=5),validation_data=(np.array(X_valid), np.array(y_valid)),
+#epochs=100)
+
+# Save model configuration
+# model.export_model('output/conv2d_model.json','output/conv2d_weights.h5',"output/conv2d_emotion_map.json", emotion_map)
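Note: this example reads `image_data/fer2013.csv`, which is not bundled with the repository (the same commit adds that path to `.gitignore`), so the fer2013.csv dataset file must be obtained separately and placed at `EmoPy/examples/image_data/fer2013.csv`. The script also builds a held-out split (`X_valid`, `y_valid`) that the active `fit_generator` call never uses; the commented-out call hints at the alternative. Assuming the model's `fit_generator` wrapper accepts a `validation_data` tuple, as that comment implies (an assumption, not verified against the ConvolutionalNNDropout API), that alternative would look roughly like this:

```python
# Sketch only: assumes the fit_generator wrapper accepts a validation_data
# tuple, as the commented-out line in the example implies.
import numpy as np

model.fit_generator(train_gen.generate(target_dimensions, batch_size=5),
                    validation_data=(np.array(X_valid), np.array(y_valid)),
                    epochs=100)
```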

EmoPy/src/data_generator.py (+2 -1)

@@ -12,7 +12,7 @@ def __init__(self, time_delay=None):
         self.labels = None
         self.config_augmentation(time_delay=time_delay)
 
-    def config_augmentation(self, zca_whitening=False, rotation_angle=90, shift_range=0.2, horizontal_flip=True,
+    def config_augmentation(self, zca_whitening=False, rotation_angle=10, shift_range=0.1, zoom_range=0.1, horizontal_flip=True,
                             time_delay=None):
         self.data_gen = ImageDataGenerator(featurewise_center=True,
                                            featurewise_std_normalization=True,
@@ -21,6 +21,7 @@ def config_augmentation(self, zca_whitening=False, rotation_angle=90, shift_rang
                                            width_shift_range=shift_range,
                                            height_shift_range=shift_range,
                                            horizontal_flip=horizontal_flip,
+                                           zoom_range=zoom_range,
                                            time_delay=time_delay)
         return self
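
The practical effect of this change is gentler augmentation: 10° rotations and 10% shifts instead of 90° and 20%, plus a mild zoom, which is better suited to tightly cropped face images. A minimal sketch of the equivalent settings using the stock Keras `ImageDataGenerator` is below; EmoPy's wrapper additionally takes a `time_delay` argument for image series, which plain Keras does not, and the mapping of `rotation_angle`/`shift_range` onto the Keras argument names is an assumption based on the surrounding diff.

```python
# Sketch of the new augmentation defaults expressed with stock Keras.
# Assumes rotation_angle maps to rotation_range and shift_range maps to
# the width/height shift ranges, as the diff context suggests.
from keras.preprocessing.image import ImageDataGenerator

data_gen = ImageDataGenerator(featurewise_center=True,
                              featurewise_std_normalization=True,
                              rotation_range=10,       # was 90 degrees
                              width_shift_range=0.1,   # was 0.2
                              height_shift_range=0.1,  # was 0.2
                              zoom_range=0.1,          # newly added
                              horizontal_flip=True)
```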