The AI research community has recently seen exciting progress in training ever-larger models. OpenAI released GPT-3, a model with 175 billion parameters, to much fanfare, and Google Brain recently trained a model with over a trillion parameters. Such enormous models, however, demand ever more computation, and are trained on large clusters of GPUs for months at a time.
In the case of GPT-3, for instance, the compute cost of a single training run was reported to be around 12 million dollars. Resources on this scale are feasible only for the largest companies, leaving research and development along this path out of reach for almost everyone else.