I've had this discussion with one particular person, and they pointed out, quite correctly, that there's a good reason UML programming never took off: the graphical development environments were always encoding the wrong thing - the requirements. The "what" rather than the "how". The "what" parts are often contradictory, which is why I have a job. A software development professional is there to turn the "what" into the "how" correctly - to break the paradoxes and make the machine do what the requirements imply, not what they literally say.
So, as one of the developers quoted in the article said, graphical environments are good for learning: small projects that show how things fit together, followed by those first frustrating experiences when a larger project is attempted and hours are spent trying to make two pieces mesh that won't work together, and then the move to text-based programming languages - the "how" behind it all. From what I've heard, any attempt at UML programming actually had two steps: first, design in the graphical environment; second, tune the generated C source code to make things actually work. Very indicative of things to come.
I also appreciated what Herb Sutter had to say - that bare-metal programming and optimization will come back into vogue in the next 10 years. Waste is waste, and graphical environments and elaborate abstractions are waste. That waste translates into environmentally relevant impacts: smaller, more efficient code uses less power and less space, and is better all around. I still hope for the day when every chip-based interface responds instantaneously to my input. Even if it only tells me that what I just requested will take a long time, that initial response shapes my interaction.