The new Half type is a 16-bit floating-point type geared toward speeding up machine learning workflows: it trades precision for faster computation and smaller storage requirements.
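To make the trade-off concrete, here is a minimal sketch of converting between float and Half (the values shown are illustrative; a Half occupies 2 bytes versus 4 for a float, and keeps roughly three decimal digits of precision):

```csharp
using System;

class HalfDemo
{
    static void Main()
    {
        // A float occupies 4 bytes; a Half occupies 2, halving storage
        // for large arrays of model weights or activations.
        float original = 3.14159f;

        // Narrowing to 16 bits keeps only about 3 decimal digits.
        Half half = (Half)original;

        // Widening back to float recovers only what the Half could store.
        float roundTripped = (float)half;

        Console.WriteLine($"original float : {original}");
        Console.WriteLine($"as Half        : {half}");
        Console.WriteLine($"round-tripped  : {roundTripped}");
    }
}
```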
Most AI chips and hardware accelerators that power machine learning (ML) and deep learning (DL) applications include floating-point units (FPUs). Algorithms used in neural networks today are often ...