Scaling Laws for Neural Language Models as Zero-Shot