DCLOG: Don't Cares-based Logic Optimization using Pre-training Graph Neural Networks

Abstract

Logic rewriting is a robust optimization technique that improves Boolean networks by substituting small segments with more effective implementations. Incorporating don't cares in this process often yields superior optimization results. Nevertheless, computing don't cares within a Boolean network can be resource-intensive, so it is crucial to develop strategies that mitigate the computational cost of don't cares while still enabling the exploration of better optimization outcomes. To address these challenges, this paper proposes DCLOG, a don't cares-based logic optimization framework that efficiently and effectively optimizes a given Boolean network. DCLOG leverages a pre-trained graph neural network model to filter out cuts without don't cares and then performs an incremental window simulation to calculate don't cares for each remaining cut. Experimental results demonstrate the effectiveness and efficiency of DCLOG on large Boolean networks: it achieves average size reductions of 15.64% and 1.44% while requiring less than 23.84% and 44.70% of the average runtime of state-of-the-art methods for the majority-inverter graph (MIG), respectively.
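To make the two-stage idea concrete, the sketch below illustrates (under assumptions, not the authors' implementation) how a learned filter followed by window simulation could expose candidate don't cares for a cut: a placeholder classifier stands in for the pre-trained GNN, and random-pattern simulation collects leaf-value combinations that never occur, i.e., candidate satisfiability don't cares. All names (`NETWORK`, `gnn_predicts_dont_cares`, `candidate_sdcs`) are hypothetical.

```python
# Hedged sketch of a two-stage don't-care flow in the spirit of DCLOG:
# (1) a learned classifier skips cuts predicted to have no don't cares,
# (2) window simulation collects candidate satisfiability don't cares (SDCs)
# for the cuts that remain. Every identifier here is illustrative.
import random
from itertools import product

# Tiny network: node -> (op, fanins); primary inputs have op "PI".
# Nodes are listed in topological order.
NETWORK = {
    "a": ("PI", ()), "b": ("PI", ()),
    "n1": ("AND", ("a", "b")),
    "n2": ("INV", ("b",)),
    "n3": ("OR", ("n1", "n2")),   # root of the cut we inspect
}

def evaluate(assignment):
    """Evaluate every node of NETWORK for one primary-input assignment."""
    values = dict(assignment)
    for node, (op, fanins) in NETWORK.items():
        if op == "AND":
            values[node] = all(values[f] for f in fanins)
        elif op == "OR":
            values[node] = any(values[f] for f in fanins)
        elif op == "INV":
            values[node] = not values[fanins[0]]
    return values

def gnn_predicts_dont_cares(root, leaves):
    """Placeholder for the pre-trained GNN classifier; here it keeps
    every cut so the simulation stage below is always exercised."""
    return True

def candidate_sdcs(root, leaves, n_patterns=256):
    """Simulate random input patterns and report leaf-value combinations
    that were never observed: these are candidate satisfiability don't
    cares (a SAT/BDD check would be needed to confirm them)."""
    pis = [n for n, (op, _) in NETWORK.items() if op == "PI"]
    observed = set()
    for _ in range(n_patterns):
        assignment = {pi: random.random() < 0.5 for pi in pis}
        values = evaluate(assignment)
        observed.add(tuple(values[l] for l in leaves))
    return [combo for combo in product([False, True], repeat=len(leaves))
            if combo not in observed]

if __name__ == "__main__":
    root, leaves = "n3", ("n1", "n2")
    if gnn_predicts_dont_cares(root, leaves):
        # (n1, n2) = (True, True) needs b = 1 and b = 0 at once, so it
        # should be reported as a candidate don't care at n3.
        print("candidate SDCs at", root, ":", candidate_sdcs(root, leaves))
```

In this toy example the leaf combination (n1, n2) = (1, 1) is unsatisfiable, so a rewriting step at n3 would be free to treat it as a don't care; the GNN stage exists to avoid paying the simulation cost on cuts where no such combination is expected.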

Publication
Proceedings of the 31st Asia and South Pacific Design Automation Conference