Why should we assume that neural networks cannot be developed to have intention and conscience? ChatGPT was taught to follow human preferences through reinforcement learning from human feedback (RLHF), the human-in-the-loop technique first demonstrated in InstructGPT. Why should we assume that an "IntentionGPT" or a "ConscienceGPT" is not similarly feasible?
I actually support Hinton's position. AI tools and techniques are dangerously close to matching human performance in almost any field. While I see some merit in your point about not overestimating today's AI, we should not underestimate tomorrow's AI either, and we should prepare ourselves for the societal shock that a dangerous combination of unprecedentedly powerful technology and human greed could bring about.