Deep convolutional neural networks (DCNNs) have been widely used in computer vision tasks. However, on edge devices, even inference incurs high computational complexity and a large amount of data access. Due to these shortcomings, the inference latency of state-of-the-art models is still impractical for real-world applications. In this paper, we propose a high-utilization, energy-aware, real-time inference deep convolutional neural network accelerator, which outperforms current accelerators. First, we use 1x1 convolution kernels as the smallest unit of the computing engine, and we design a suitable computing unit for each model according to its requirements. Second, we use a Reuse Feature SRAM to store the output of the current layer on-chip and use it as the input of the next layer. Moreover, we introduce an Output Reuse Strategy and a Ring Stream Dataflow, which not only increase the on-chip data reuse rate but also reduce the amount of data exchanged between the chip and DRAM. Finally, we present an On-fly Pooling Module so that the computation of pooling layers is completed directly on-chip. With the aid of the proposed methods, the implemented CNN accelerator chip achieves an extremely high hardware utilization rate. We substantially reduce the amount of data transfer for a specific model, ECNN [1]: compared to methods without a reuse strategy, we reduce the data access amount by 533x. At the same time, the accelerator provides enough computing power for real-time execution of existing image classification models such as VGG16 [2] and MobileNet [3]. Compared with the design in [4], our accelerator achieves a 7.52x speedup and 1.92x higher energy efficiency.
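To make the benefit of keeping layer outputs on-chip concrete, the following is a minimal sketch, not the paper's implementation, of how an output-reuse strategy cuts DRAM traffic: it counts element transfers for a toy layer pipeline with and without holding intermediate feature maps in on-chip SRAM. All layer shapes, the SRAM capacity, and the helper names (`elems`, `dram_traffic_no_reuse`, `dram_traffic_with_reuse`) are illustrative assumptions, and the resulting ratio is for the toy shapes only, not the 533x figure reported above.

```python
# Toy DRAM-traffic model for on-chip output reuse (illustrative only).
# Each shape is (channels, height, width); all sizes are assumed.

input_shape = (3, 224, 224)
layers = [
    (32, 112, 112),   # hypothetical feature-map shapes
    (64, 56, 56),
    (128, 28, 28),
]

def elems(shape):
    c, h, w = shape
    return c * h * w

def dram_traffic_no_reuse(input_shape, layers):
    """Every layer reads its input from DRAM and writes its output back."""
    traffic = 0
    prev = input_shape
    for shape in layers:
        traffic += elems(prev)    # read input feature map from DRAM
        traffic += elems(shape)   # write output feature map to DRAM
        prev = shape
    return traffic

def dram_traffic_with_reuse(input_shape, layers, sram_capacity):
    """Intermediate outputs stay in on-chip SRAM when they fit, so only
    the network input, the final output, and any spills touch DRAM."""
    traffic = elems(input_shape)           # initial input read
    for shape in layers[:-1]:
        if elems(shape) > sram_capacity:   # spill: feature map exceeds SRAM
            traffic += 2 * elems(shape)    # write out + read back
    traffic += elems(layers[-1])           # final output write
    return traffic

if __name__ == "__main__":
    base = dram_traffic_no_reuse(input_shape, layers)
    reused = dram_traffic_with_reuse(input_shape, layers, sram_capacity=512 * 1024)
    print(f"no reuse:   {base} element transfers")
    print(f"with reuse: {reused} element transfers ({base / reused:.1f}x less)")
```

With these toy shapes, all intermediates fit in the assumed SRAM, so DRAM traffic shrinks to just the initial input read and the final output write; the actual savings depend on the feature-map sizes and on-chip capacity of the target model.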