One thing I noticed with a (non-cloud) client was that their infrastructure had evolved from a static address allocation from RIPE, so it was very apparent where their physical presence was, and you could see all of their web resources sitting in the same block.
Very limited examination made it clear they had little redundancy and minimal bandwidth; the risk of accidental DDoS was pretty high, and one would take out ALL of their services along with their own outbound bandwidth.
I think that in worrying about cloud risks, I'd sometimes set aside the risks of not adopting cloud practices, even though they were making plausible use of virtualisation within their own network. They were looking at adopting cloud-based Office services, which would make the network dependency even worse, although they were budgeting for more redundancy as part of that project.
I've seen malicious DDoS; accidental DDoS from web spammers overestimating the infrastructure at my end (on one occasion the botnet of account-creating bots was bigger than we'd planned for, and all of it arrived in the same second); and accidental DDoS from mistakes in app design or mishandled error conditions, and the last of these is far more frequent.
I've also seen large amounts of bandwidth being wasted, but because bandwidth wasn't the limiting factor at the time no one noticed, so there was less headroom when trouble started.
Guess I'm saying that adding bandwidth monitoring, and redundancy, to network links should be part of that planning ahead. The trouble isn't necessarily malicious DDoS, but these kinds of controls help address multiple potential issues that degrade network performance.
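For concreteness, here's a minimal sketch of the kind of bandwidth monitoring I mean. On Linux, cumulative per-interface byte counters are exposed in /proc/net/dev; the interface name, sample values, polling interval, and alert threshold below are all illustrative assumptions, not real traffic:

```python
# Minimal bandwidth-rate check (sketch). Two /proc/net/dev-style counter
# samples are compared to compute a receive rate, which is then flagged
# against an assumed link-capacity threshold.

def rx_bytes(proc_net_dev: str, iface: str) -> int:
    """Parse the cumulative received-bytes counter for iface."""
    for line in proc_net_dev.splitlines():
        name, _, rest = line.partition(":")
        if name.strip() == iface:
            return int(rest.split()[0])  # first field after ':' is rx bytes
    raise ValueError(f"interface {iface!r} not found")

def mbit_per_s(prev_bytes: int, curr_bytes: int, interval_s: float) -> float:
    """Convert a byte-counter delta over interval_s into Mbit/s."""
    return (curr_bytes - prev_bytes) * 8 / interval_s / 1_000_000

# Two samples taken 10 s apart (made-up numbers for illustration).
SAMPLE_T0 = "eth0: 1000000000 9876 0 0 0 0 0 0  2000000000 5432 0 0 0 0 0 0"
SAMPLE_T1 = "eth0: 1125000000 9976 0 0 0 0 0 0  2100000000 5532 0 0 0 0 0 0"

rate = mbit_per_s(rx_bytes(SAMPLE_T0, "eth0"), rx_bytes(SAMPLE_T1, "eth0"), 10.0)
THRESHOLD_MBIT = 80.0  # e.g. 80% of an assumed 100 Mbit/s link
print(f"rx rate: {rate:.1f} Mbit/s", "ALERT" if rate > THRESHOLD_MBIT else "ok")
```

In practice you'd poll the live counters on a schedule (or use SNMP interface counters from the router) rather than parse canned strings, but the point is that a sustained-rate check this simple would have caught both the wasted bandwidth and the early stages of an accidental DDoS.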