Recently I had a real-world run-in with a horribly coded application.
Public transport in the Netherlands is slowly switching to a new form of payment: cards with what seem to be RFID chips in them. You carry a balance on your card, and as you scan in and out while using public transport, the balance is decreased. Fine so far. I wanted to try it out on the Amsterdam subway, so I went to load 5€ onto my card. This is done at a machine. I followed the menus, paid 5€, and assumed all went well; after all, I got no error message and I did get a receipt. Just to be sure, though, I put my card next to the reader again and checked my balance. Zero. WTF? I was confused and pissed that it ate 5 bucks.

Later, in the office, I checked the FAQ on the website for the new system and found this was a common problem. The cause? I had apparently taken my card away from the RFID reader too early. Huh? How can that be? It deducted money from my bank account, right? Aren't these things using basic principles of transactional processing? I mean, it's not hard to verify that the card has the money on it and only THEN take it off my account, right?
Perhaps they are worried that people will pull the card away at just the right moment, between the money being loaded on and the verification check, and thus get free money. I'm sure some sort of logic can be applied to prevent this, but even if not, then at least put some sort of 'please wait' message on the screen until the verification is done.
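To make the point concrete, here is a minimal sketch of the "verify the card first, charge the account only afterwards" ordering I'm describing. All the classes and method names here are invented for illustration; the real system's card protocol is obviously more involved:

```python
# Hypothetical sketch: charge the bank account only AFTER the card write
# is verified. CardRemovedError, FakeCard, FakeReader, and FakeBank are
# all invented stand-ins, not any real ticketing API.

class CardRemovedError(Exception):
    """Raised when the card leaves the RFID field mid-operation."""

class FakeCard:
    def __init__(self, balance=0):
        self.balance = balance

class FakeReader:
    """Simulates a reader with a card held in place the whole time."""
    def write_balance(self, card, amount):
        card.balance += amount
    def read_balance(self, card):
        return card.balance

class FlakyReader(FakeReader):
    """Simulates a card pulled away too early: the write never lands."""
    def write_balance(self, card, amount):
        pass  # card left the field, nothing was written

class FakeBank:
    def __init__(self):
        self.charged = 0
    def charge(self, amount):
        self.charged += amount

def load_balance(reader, bank, card, amount):
    """Return True and charge the bank only if the card write verified."""
    before = reader.read_balance(card)
    reader.write_balance(card, amount)
    # Read back and verify BEFORE touching the bank account.
    if reader.read_balance(card) != before + amount:
        # Write didn't stick (e.g. card removed): charge nothing,
        # show an error instead of silently eating the money.
        return False
    bank.charge(amount)  # only now does money leave the account
    return True
```

With a card held in place, `load_balance(FakeReader(), bank, card, 5)` returns `True` and the bank is charged; with `FlakyReader` it returns `False` and the bank is never touched, which is exactly the behaviour the machine in Amsterdam should have had.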
Seeing systems like this makes me cringe. How the hell is it possible that someone approved this design? I think it's time the computer science industry implemented something similar to the P.Eng. designation that engineers have. Computers run all sorts of complex and vital systems these days, and the only guarantee of quality we have is the hope that the people who developed the system had the brains and resources to get it right.