I’ve been preparing for my exam on programming and modeling languages. The usual classification into imperative, functional, OOP and logic programming is somewhat inconsistent, since there are e.g. functional OOP languages.

To me, OOP is more of a software engineering thing, usually applied to imperative programming with a load of syntactic sugar.

The usual OOP examples suck. They’re so basic that all the OOP and modeling machinery is a complete waste of time on them. So I was considering more complex examples, and ended up at common design patterns.

A couple of design patterns in OOP seem like workarounds to me; a new language (well, maybe C# has addressed some of these, I don’t know) could probably add some more syntactic sugar to ease their use.
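To make the “patterns as workarounds” point concrete, here is a sketch in Python (my choice of language for illustration, not anything from a pattern catalogue): the classic Iterator pattern written out by hand, next to the generator syntax that turns the whole pattern into sugar.

```python
# The Iterator pattern done "by hand", the way you'd have to in a
# language without built-in support: an object holding the traversal
# state, with an explicit protocol for advancing it.
class CountdownIterator:
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return self

    def __next__(self):
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1

# With generator syntax, the compiler builds that state machine for
# you -- the "pattern" disappears into the language.
def countdown(n):
    while n > 0:
        yield n
        n -= 1

print(list(CountdownIterator(3)))  # [3, 2, 1]
print(list(countdown(3)))          # [3, 2, 1]
```

Generators are exactly the kind of syntactic sugar I mean: a recurring pattern absorbed into the language.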

I hate UML diagrams. To me they just emphasize all the obvious stuff, while they quickly get too complex to be of use for “seeing” the important things.

Even UML activity diagrams in my opinion don’t properly depict the flow of information/data in the program, which is what I really care about. Activity diagrams basically show you the sequence in which methods are called. But often you’ll have a method being called thousands of times, with the actually relevant information being in the parameters passed. And some stuff might be done asynchronously, too.

For certain problems (well, not for stuff like e.g. the train door used in OOP examples), it could be much more useful to abstract away from the involved classes - which are often just wrappers for data: “records”, “datagrams”, TCP packets - and instead model flows of data. Think of drawing conveyor belts transporting data, filtering it, duplicating it, …

The GStreamer framework seems to employ such a model for audio and video processing. You build pipelines by placing e.g. sources and sinks in them. But you can interpret dozens of code examples with this model. SAX transforms a stream of characters into a stream of XML nodes. UI main loops are basically a stream of UI events. Network data is obviously easy to model as a data flow; the “obvious” mapping into “packet objects”, however, is often useless. Even when doing a SELECT on a database, the result has an obvious representation as a data flow (a flow of records in this case).
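The SAX case can be sketched as a single pipeline stage. This is a toy of my own making, not real SAX - no attributes, no error handling - just enough to show “characters in, nodes out”:

```python
# Toy SAX-like stage: consumes a stream of characters, emits a stream
# of (kind, content) events.
def xml_events(chars):
    buf = []
    for c in chars:
        if c == '<':
            if buf:
                yield ('text', ''.join(buf))
                buf = []
        elif c == '>':
            yield ('tag', ''.join(buf))
            buf = []
        else:
            buf.append(c)

doc = "<a>hi<b>x</b></a>"
print(list(xml_events(iter(doc))))
# [('tag', 'a'), ('text', 'hi'), ('tag', 'b'), ('text', 'x'),
#  ('tag', '/b'), ('tag', '/a')]
```

The same shape - a generator consuming one stream and yielding another - fits the UI-event and database-record cases, too.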

You probably only need a couple of “primitives”. Sources and sinks, obviously. Filters, Y and tee modules. Queues and caches could be interesting, too. Note that data flows can work in two ways: push and pull. Maybe sometimes the same code could be used for pull and push operation, too. When push- and pull-driven flows meet, you often need threads to connect them. For example, a SAX parser will usually pull character data from a stream and push a stream of XML nodes as output. One way to get a pull-able output stream out of this (I’m ignoring e.g. the existing XmlPull API for this example) is to use a cache (which will eventually contain the whole document as nodes); another is to use two threads and simply block the SAX thread until the other, pulling thread has requested another node.
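The two-thread variant can be sketched with a bounded queue. Again a sketch under my own assumptions (the producer here just stands in for a SAX parser pushing nodes at its handler):

```python
import queue
import threading

_DONE = object()  # sentinel marking the end of the stream

def push_producer(emit):
    # Stands in for a push-driven parser calling its handler.
    for node in ["<a>", "text", "</a>"]:
        emit(node)

def as_pull_stream(producer):
    """Wrap a push-driven producer into a pull-able generator."""
    q = queue.Queue(maxsize=1)  # size 1: producer blocks until consumer pulls

    def run():
        producer(q.put)  # each emit() blocks while the queue is full
        q.put(_DONE)

    threading.Thread(target=run, daemon=True).start()
    while True:
        item = q.get()
        if item is _DONE:
            return
        yield item

print(list(as_pull_stream(push_producer)))  # ['<a>', 'text', '</a>']
```

The bounded queue is what blocks the pushing thread until the pulling thread has requested another node - exactly the connection point described above.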

Of course you can do all this in existing languages (you know, this doesn’t give you anything new beyond Turing completeness) - just like you could do OOP in assembler. You can, but it would be nice if compilers offered you an easier syntax. It would also be nice if I didn’t have to write different code for pull-driven and push-driven operation. Also note that push-type operation is rather typical for imperative programming languages - for i in range(0,10): print i - whereas pull-type operation is typical for “lazy” functional languages - take 3 (repeat 'a').
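The two styles can even be contrasted within one language. A sketch in Python 3, where itertools gives you the pull-style counterpart of the Haskell expression:

```python
import itertools

# Push style: the loop drives and pushes each value at the consumer.
out = []
for i in range(0, 10):
    out.append(i)  # analogous to "print i" above
print(out)  # [0, 1, 2, ..., 9]

# Pull style: an infinite lazy stream from which the consumer pulls
# only what it needs -- a counterpart of Haskell's take 3 (repeat 'a').
pulled = list(itertools.islice(itertools.repeat('a'), 3))
print(pulled)  # ['a', 'a', 'a']
```

The interesting question is whether a language could let you write the transformation once and have the compiler derive both driving modes.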