# AI Verification: Constrained Deep Learning
Constrained deep learning is an advanced approach to training deep neural networks by incorporating domain-specific constraints into the learning process. By integrating these constraints into the construction and training of neural networks, you can guarantee desirable behavior in safety-critical scenarios where such guarantees are paramount.
This project aims to develop and evaluate deep learning models that adhere to predefined constraints, which could be in the form of physical laws, logical rules, or any other domain-specific knowledge. In the context of AI verification, constrained deep learning provides guarantees that certain desirable properties are present in the trained neural network by design. These desirable properties could include monotonicity, boundedness, and robustness amongst others.
### Introductory Examples (Short)
Below are links to markdown versions of MATLAB Live Scripts that you can view on GitHub®.

- [Fully input convex neural networks in 1-dimension](examples/convex/introductory/PoC_Ex1_1DFICNN.md)
- [Fully input convex neural networks in n-dimensions](examples/convex/introductory/PoC_Ex2_nDFICNN.md)
- [Partially input convex neural networks in n-dimensions](examples/convex/introductory/PoC_Ex3_nDPICNN.md)
- [Fully input monotonic neural networks in 1-dimension](examples/monotonic/introductory/PoC_Ex1_1DFMNN.md)
- [Fully input monotonic neural networks in n-dimensions](examples/monotonic/introductory/PoC_Ex2_nDFMNN.md)
- [Lipschitz continuous neural networks in 1-dimension](examples/lipschitz/introductory/PoC_Ex1_1DLNN.md)
These examples make use of [custom training loops](https://uk.mathworks.com/help/deeplearning/deep-learning-custom-training-loops.html) and the [`arrayDatastore`](https://uk.mathworks.com/help/matlab/ref/matlab.io.datastore.arraydatastore.html) object. To learn more, click the links.
## Technical Articles
This repository focuses on the development and evaluation of deep learning models that adhere to constraints crucial for safety-critical applications, such as predictive maintenance for industrial machinery and equipment. Specifically, it focuses on enforcing monotonicity, convexity, and Lipschitz continuity within neural networks to ensure predictable and controlled behavior.
By emphasizing constraints like monotonicity, constrained neural networks ensure that predictions of the Remaining Useful Life (RUL) of components behave intuitively: as a machine's condition deteriorates, the estimated RUL should monotonically decrease. This is crucial in applications like aerospace or manufacturing, where an accurate and reliable estimation of RUL can prevent failures and save costs.
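
The monotonicity guarantee described above can be made concrete with a small sketch. The repository's examples are MATLAB Live Scripts; the NumPy snippet below is only an illustration of one general construction (signed parameters re-parameterised through `exp` so the effective weights are positive, composed with an increasing activation), with arbitrary names and sizes, not the repository's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D fully monotonic network: effective weights are exp(.)
# of unconstrained parameters (hence strictly positive) and the activation
# (tanh) is increasing, so the composition is nondecreasing in the input.
W1 = rng.standard_normal((8, 1))   # unconstrained parameters
b1 = rng.standard_normal(8)
W2 = rng.standard_normal((1, 8))
b2 = rng.standard_normal(1)

def monotonic_net(x):
    """Scalar-input network that is nondecreasing in x by construction."""
    h = np.tanh(np.exp(W1) @ np.atleast_1d(x) + b1)  # positive weights, increasing activation
    return float((np.exp(W2) @ h + b2)[0])

# Check: outputs on an increasing grid of inputs never decrease.
xs = np.linspace(-3.0, 3.0, 200)
ys = [monotonic_net(x) for x in xs]
assert all(y2 >= y1 for y1, y2 in zip(ys, ys[1:]))
```

For an RUL model, the same construction applied to (negated) degradation features would force the predicted remaining life to decrease as degradation grows.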
Alongside monotonicity, Lipschitz continuity is also enforced to guarantee model robustness and controlled behavior. This is essential in environments where safety and precision are paramount, such as control systems in autonomous vehicles or precision equipment in healthcare.
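
One standard way to obtain such a guarantee, discussed in Gouk et al. [3], is to bound a feed-forward network's Lipschitz constant by the product of its layers' spectral norms, which is valid when the activations are themselves 1-Lipschitz (e.g. ReLU). The NumPy sketch below illustrates the bound; the architecture and sizes are arbitrary assumptions, not this repository's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative feed-forward network with 1-Lipschitz ReLU activations.
# The product of the layers' spectral norms upper-bounds its Lipschitz
# constant with respect to the Euclidean norm.
weights = [rng.standard_normal((16, 4)),
           rng.standard_normal((16, 16)),
           rng.standard_normal((1, 16))]

def net(x):
    for W in weights[:-1]:
        x = np.maximum(W @ x, 0.0)  # ReLU is 1-Lipschitz
    return weights[-1] @ x

# Spectral norm of each weight matrix (largest singular value).
lip_bound = np.prod([np.linalg.norm(W, 2) for W in weights])

# Empirical check: |f(x) - f(y)| never exceeds lip_bound * |x - y|.
for _ in range(100):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    gap = np.linalg.norm(net(x) - net(y))
    assert gap <= lip_bound * np.linalg.norm(x - y) + 1e-9
```

Training-time enforcement then amounts to keeping each layer's spectral norm below a prescribed budget so the product stays under the target Lipschitz constant.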
Convexity is especially beneficial for control systems as it inherently provides boundedness properties. For instance, by ensuring that the output of a neural network lies within a convex hull, it is possible to guarantee that the control commands remain within a safe and predefined operational space, preventing erratic or unsafe system behaviors. This boundedness property, derived from the convex nature of the model's output space, is critical for maintaining the integrity and safety of control systems under various conditions.
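
For intuition, a fully input convex network can be built by keeping the weights that act on hidden activations nonnegative and using convex, nondecreasing activations such as ReLU, in the spirit of input convex neural networks (Amos et al.). The NumPy sketch below, with arbitrary sizes and illustrative only (not the repository's MATLAB implementation), checks midpoint convexity numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each hidden unit relu(w.x + b) is convex in x; a nonnegative combination
# of convex functions (plus a constant) is convex, so the output is a
# convex function of the input by construction.
W1 = rng.standard_normal((8, 2))          # first layer may be signed
b1 = rng.standard_normal(8)
W2 = np.abs(rng.standard_normal((1, 8)))  # nonnegative weights on hidden units
b2 = rng.standard_normal(1)

def ficnn(x):
    """Fully input convex network: convex in x by construction."""
    return float((W2 @ np.maximum(W1 @ x + b1, 0.0) + b2)[0])

# Numerical check of midpoint convexity: f((x+y)/2) <= (f(x)+f(y))/2.
for _ in range(100):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    assert ficnn((x + y) / 2) <= 0.5 * (ficnn(x) + ficnn(y)) + 1e-9
```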
These technical articles explain key concepts of AI verification in the context of constrained deep learning. They include discussions on how to achieve the specified constraints in neural networks at construction and training time, as well as deriving and proving useful properties of constrained networks in AI verification applications. It is not necessary to go through these articles in order to explore this repository; however, you can find references and more in-depth discussion here.
- [3] Gouk, Henry, et al. “Regularisation of Neural Networks by Enforcing Lipschitz Continuity.” Machine Learning, vol. 110, no. 2, Feb. 2021, pp. 393–416. https://doi.org/10.1007/s10994-020-05929-w
- [4] Kitouni, Ouail, et al. “Expressive Monotonic Neural Networks.” arXiv:2307.07512, 14 July 2023. http://arxiv.org/abs/2307.07512