Large language models (LLMs) as coding agents pose severe security risks, including prompt injections, hidden malicious instructions, and hallucinated insecure code, enabling data breaches and system compromises. Emerging defenses like privilege controls offer limited protection. Experts urge prioritizing robust safeguards to avert widespread cyber threats.