Abstract:
This thesis introduces the Colored Petri Neural Network (CPNN), a novel framework that integrates Colored Petri Nets (CPNs) with multi-layer perceptrons (MLPs) to enhance the interpretability of neural networks. The CPNN model addresses the challenge of explainability in deep learning by enabling formal, fine-grained tracking of information flow during forward propagation, providing transparent insight into feature contributions and the decision-making process.
By leveraging the formal verification strengths of CPNs, the model supports rigorous analysis without compromising predictive performance, a property that is particularly valuable in critical domains such as healthcare. Additionally, a mathematical investigation of the effects of neural network hyperparameters on state-space complexity reveals how factors such as layer depth and mini-batch size influence computational requirements, guiding more efficient design and verification.
This work lays a foundation for developing interpretable, efficient, and verifiable deep learning systems for critical applications.