Feb 2, 2023
No need to hit the panic button just yet. Large Language Models (LLMs) are confident bullshit generators. You should only worry about them replacing you if you're working mostly on low-value code (i.e., simple problems). They sometimes fluke their way into higher-value code, but I wouldn't touch it with a barge pole. Will they get better? Yes, for sure. But guess what ... that only means you'll stop wasting your time on low-value code and focus on solving harder problems. Until (if ever) we hit the singularity, you really don't need to worry about AI at the level of LLMs.