Running Llama 2 on CPU Inference Locally