ASNet: Introducing Approximate Hardware to High-Level Synthesis of Neural Networks
IEEE International Symposium on Multiple-Valued Logic (ISMVL)
Approximate Computing is a design paradigm which exploits the error tolerance inherent to many applications in order to trade accuracy for performance. A classic example of such an application is machine learning with Neural Networks (NNs). Recently, LeFlow, a High-Level Synthesis (HLS) flow for mapping TensorFlow NNs into hardware, has been proposed. The main steps of LeFlow are to compile the TensorFlow models into the LLVM Intermediate Representation (IR), perform several transformations, and feed the result into an HLS tool. In this work, we take HLS-based NN synthesis one step further by integrating hardware approximation. To achieve this goal, we extend LeFlow such that (a) the user can specify hardware approximations, and (b) the user can analyze the impact of hardware approximation already at the software level. Based on the exploration results that satisfy the NN quality expectations, we import the chosen approximate hardware components into an extended version of the HLS tool and finally synthesize the NN to Verilog. The experimental evaluation demonstrates the advantages of our proposed ASNet for several NNs: significant area reductions as well as improvements in operating frequency are achieved.
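To illustrate the kind of software-level exploration the abstract describes, the following is a minimal sketch (not the actual LeFlow/ASNet tooling; all names and the truncation scheme are illustrative assumptions): an approximate multiplier that drops low-order bits of one operand is modeled in software, and its error is measured on a dot product, the core operation inside NN layers.

```python
def approx_mult(a, b, trunc_bits=4):
    """Hypothetical approximate multiplier model: truncate the low
    trunc_bits of operand a (e.g. an 8-bit activation) before multiplying.
    In hardware this would save partial-product rows, reducing area."""
    mask = ~((1 << trunc_bits) - 1)
    return (a & mask) * b

def dot(xs, ws, mult):
    # Dot product parameterized by the multiplier implementation,
    # so exact and approximate hardware can be compared in software.
    return sum(mult(x, w) for x, w in zip(xs, ws))

# Toy integer activations and weights (illustrative values only).
xs = [200, 120, 37, 90]
ws = [3, 15, 8, 21]

exact = dot(xs, ws, lambda a, b: a * b)          # 4586
approx = dot(xs, ws, approx_mult)                # 4192
rel_err = abs(exact - approx) / exact            # ~8.6% error
```

A designer would sweep `trunc_bits` over the network's operations, keep configurations whose accuracy loss stays within the NN quality expectations, and only then commit the corresponding approximate components to synthesis.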