Preamble: This started out as a short blog post following an interesting conversation in (where else but) the kitchen, and turned into a full-blown conceptual journey through parts of my ways-of-thinking brain 🤷. Things happen. Also, ten! woohoo!
I’ve been reading a lot about computational thinking (CT) recently. This comes on the one hand from CT being a chapter in the ways of thinking in informatics course, and on the other hand from questions arising from the recent discussions about ChatGPT.
Computational Thinking
In a teacup, computational thinking is a relatively recent form of thinking, or rather problem solving, where you approach questions knowing that you can use computers to answer them. A famous example of this is shotgun sequencing, a method to sequence DNA that uses a pattern matching algorithm to reconstruct a whole DNA sequence from loads of little sequenced fragments; while this would be a hopeless endeavor by hand, it makes sense if you know you can use a computer to do it. Note that CT is not the same as algorithmic thinking, which is the ability to come up with the sequence of steps to solve a problem.
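To make that concrete, here is a minimal toy sketch of the idea (the fragments are invented, and real assemblers are vastly more sophisticated): greedily merge the two fragments with the longest overlap until only one sequence remains. It is exactly the kind of procedure that is hopeless by hand but sensible once you know a computer will do the matching.

```python
# Toy sketch of shotgun-style reconstruction: repeatedly merge the two
# fragments with the longest overlap. Real assemblers are far more
# sophisticated; the fragment data below is made up for illustration.

def overlap(a: str, b: str) -> int:
    """Length of the longest suffix of a that is also a prefix of b."""
    for length in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def assemble(fragments: list[str]) -> str:
    frags = list(fragments)
    while len(frags) > 1:
        # Find the pair with the longest overlap and merge it.
        length, i, j = max(
            (overlap(a, b), i, j)
            for i, a in enumerate(frags)
            for j, b in enumerate(frags)
            if i != j
        )
        merged = frags[i] + frags[j][length:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags[0]

print(assemble(["GATTAC", "TTACAG", "ACAGGT"]))  # GATTACAGGT
```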
The term computational thinking has been around for some time (like the 1980s or so), but it has received newfound interest after Jeannette Wing published an article, Computational Thinking, in 2006 in one of the most-cited journals of informatics, Communications of the ACM. Since then, it has become a central part of a newfound self-image of informatics. In her work, Wing first described abstraction and automation as the two main elements of computational thinking. Abstraction of course is one of the core competencies needed to solve problems using a computer: only when you find the computational core of a problem through abstraction from reality can you actually use a computer to solve it. Usually, abstraction leads to mathematical descriptions of problems. Automation then comprises the upscaling of the mathematics, even very complicated math, so that a solution that would be impossible to calculate by hand can be found.
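A toy illustration of the pair (my example, not Wing’s): estimating π by sampling random points. Abstraction turns a geometric question into a probability statement, namely that a random point in the unit square lands inside the quarter circle with probability π/4; automation scales that statement to a million samples that nobody would ever compute by hand.

```python
# Abstraction: the area question becomes a probability statement.
# Automation: a million random samples, impossible by hand, trivial here.
import random

def estimate_pi(samples: int) -> float:
    hits = sum(
        1
        for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * hits / samples

print(estimate_pi(1_000_000))  # roughly 3.14
```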
Iteration
Later on, these two »steps« of computational thinking were complemented with a third concept, execution and evaluation (analyses), to form an iterative loop: abstraction –› automation –› analyses –› start from the top. This of course reflects an ongoing trend in informatics to regard iteration as something other than trial and error, as many modern software engineering methods have iteration at their core. However, we find iteration here in a very »domesticated« form: iteration is a »circle of phases« that you have to run through.
This works well as long as you believe that you can actually separate these phases, and that work in one phase is free from interaction with the other phases. It is based on the dogma that work in each phase is self-contained, and that you will never learn something from doing that work that goes beyond the scope of the phase. If during implementation you find that something is wrong in the requirements, you are either out of luck, or in a crisis.
Even the Stanford d-school flavor of design thinking tames iteration to fit into an orderly process, restricting it to make it manageable.
I consider the Stanford d-school interpretation of design thinking rather limited. I disagree with the processualization of design thinking, because I see design thinking as a mentality rather than as a process.
Design Thinking vs. Design Thinking
I have been following a much more relaxed understanding of the process, articulated by Bryan Lawson in his seminal book How Designers Think from 1980: »It is clear from our analysis of the nature of design problems that the designer must inevitably expend considerable energy in identifying the problems. It is central to modern thinking about design that problems and solutions are seen as emerging together, rather than one following logically upon the other…. [B]oth problem and solution become clearer as the process goes on.«
If you believe in the type of models represented by the illustrations above, this kind of thinking does not fit into your world view, and this is why I call design thinking a mindset: you need to understand that designing something inevitably also transforms the problem, and act accordingly. And as much as software engineering models hate it, this requires much more trial and error than the domesticated version of iteration allows for.
A pyramid of giants
The shoulders Lawson is standing on are the description of wicked problems (1), brought to us by Horst Rittel and Melvin Webber in Dilemmas in a General Theory of Planning in 1973: »[…] that you cannot understand the problem without having a concept of the solution in mind; and that you cannot gather information meaningfully unless you have understood the problem, but that you cannot understand the problem without information about it.« This class of problem requires problem solvers to simultaneously define and solve the problem, which requires a whole lot of the trial-and-error flavor of iteration.
So we find that, at least in design theory, iteration is something very different from jumping back to a predetermined phase in a process of successive phases. Iteration is a mindset that allows you to refine the problem you are working on and solve it at the same time. With each step we take in this open process, we learn more about the problem, and at the same time about what solutions are possible. We do this until our understanding of the problem and our design converge towards a state that we are ready to call »a solution«.
Rationality v. design
This is reflected in Henrik Gedenryd's critique of the presumed rationality of the problem solving process: first understand the problem, then solve the problem, finally carry out the solution. Gedenryd dedicated a whole chapter of his dissertation How Designers Work – the title being an obvious reference to Lawson's How Designers Think – to deconstructing this rational problem solving model. He beautifully shows how the desire of humans to see themselves as rational beings creates this misunderstanding of how we solve problems and, ultimately, of cognition.
Take for example the myth of the so-called Feynman algorithm: write down the problem, think real hard, write down the solution. The wording we are all familiar with comes from Murray Gell-Mann, a colleague of Feynman, who said this tongue-in-cheek during an interview for the New York Times, expressing his admiration for the Nobel Prize-winning physicist.
But Feynman himself was rather dismissive of this characterization of his problem solving process, and described it rather differently: »You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say, ›How did he do it? He must be a genius!‹«
In other words, what Feynman does – even for the kind of hard-but-tame problems he is famous for solving – is a highly elaborate version of trial and error! Unfortunately, the mystical Feynman wins against the real Feynman in practice, as witnessed by statements like this: »In fact, I try to instil this problem-solving ability in my students when I teach introductory programming, as they all rush head-first into writing code before actually thinking about the problem they are trying to solve.«
Back to computational thinking
What I am trying to argue is that underlying the dominant models of computational thinking is still a sequential model of problem solving, and that while this model is very popular and academically highly attractive – rationality! – it is in fact inadequate.
Which finally brings me to the point I wanted to make when I started writing: the trial-and-error type of iteration is a central component of computational thinking. The possibility to quickly change code and run it again is one of the core mechanisms enabling the problem-setting-problem-solving process in computational thinking. If we run with this for the moment – and ignore the introductory programming teacher from the previous paragraph – we can try to see where we get when we apply some design theory ideas to computational thinking.
Iteration in computational thinking
Two of the most important principles for the kind of iterative thinking I’ve been writing about here are reflection-in-action and reflection-on-action, as described by Donald Schön in his 1983 book The Reflective Practitioner. In as few words as possible, reflection-in-action is when you reflect on what you are doing right now, and reflection-on-action is when you step back and reflect on what you did after you finished it, or before you do it. Both modes of reflection are essential to learning, and both unlock different insights, different things to learn.
If we take reflection-in-action and reflection-on-action to programming, we can see how rushing head-first into writing code (see above) might be an approach where we learn a lot, as long as we do it in the proper mindset. This shifts coding from being work that executes an already finished blueprint to an activity full of discovery and learning. This might not be the best way to take your first steps into programming, but it could be a good supplement of a more curious and exploratory nature. For this kind of exploration, John Dewey (not Melvil Dewey of decimal system fame and sexual harassment disgrace) coined the term »doing for the sake of knowing«, which is a beautiful phrase to describe how we can, if we do it the right way, learn a lot from practice.
And computer code affords this kind of practice brilliantly; the infinite patience of the computer gives us the possibility to change code as often as we want, as little or as much as necessary, and run it again and again, to learn from these changes and, as was said before, converge problem and solution into a satisfying state.
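A made-up but probably familiar miniature of such a session: we hold a few hypotheses about some data, run them against the computer's infinite patience, and let the failures teach us something about the problem itself, not just about the program.

```python
# Exploratory, trial-and-error coding in miniature: each run is a small
# experiment, and every failed attempt sharpens our picture of the data.
from datetime import datetime

sample = "2023-03-05"
hypotheses = ["%d.%m.%Y", "%m/%d/%Y", "%Y-%m-%d"]  # guesses about the format

for fmt in hypotheses:
    try:
        print(fmt, "->", datetime.strptime(sample, fmt))
        break  # a hit: we now understand the data better than before
    except ValueError:
        print(fmt, "-> no, the data does not look like that")
```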
Why ChatGPT again?
And this is where I finally get back to ChatGPT and the whole family of generative ML systems, because this is how we use them: we start with a prompt that we suspect will give us a good result, and then iteratively refine that prompt until we get to a point where the result is what we by now think is perfect, or at least good enough. Especially with image generating ML systems, the result is often substantially different from what I imagined in the first place, but (most of the time) not in a deficient way; the process changed what I wanted to the point where it meets the result.
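The shape of that loop, sketched in code; note that generate and looks_good are hypothetical stand-ins for whatever model API and judgment you actually use, because the point here is the loop, not the calls:

```python
# The prompt refinement loop as trial-and-error iteration. `generate` and
# `looks_good` are hypothetical stand-ins, not a real API.

def refine(prompt: str, generate, looks_good, max_rounds: int = 10) -> str:
    result = generate(prompt)
    for _ in range(max_rounds):
        if looks_good(result):
            return result  # good enough: prompt and result have converged
        # The result changes the prompt, and often it changes what we
        # wanted in the first place.
        prompt = input(f"Got: {result!r}\nRefined prompt: ")
        result = generate(prompt)
    return result
```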
Finally coming to an end, I want to assert that the trial-and-error-variant of iteration is a core mechanism of computational thinking, despite the persistent denial of engineers about the nature of their work.
Afterword: I’m not sure I made a convincing case for my argument. Please feel free to challenge it in the comments or on mastodon.
Footnotes
1: Note that not every problem is a wicked problem. Solving a quadratic equation is a tame problem. Proving the four color theorem is a very hard, but tame, problem. Wickedness in problems comes when you involve people, which, incidentally, most problems in informatics do.
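To illustrate what »tame« means here: one fixed recipe solves every instance of the quadratic equation (a minimal sketch, real roots assumed).

```python
# A tame problem: a fixed recipe solves every instance. For brevity this
# sketch assumes real roots, i.e. a non-negative discriminant.
import math

def solve_quadratic(a: float, b: float, c: float) -> tuple[float, float]:
    d = math.sqrt(b * b - 4 * a * c)  # discriminant, assumed non-negative
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

print(solve_quadratic(1, -3, 2))  # (2.0, 1.0)
```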
2: Of course, iteration can mean two things when it comes to computation: the algorithmic component of iterating some piece of code over and over again, until some condition is satisfied; and the iteration I have babbled on about for too long already; you can probably guess which one I mean…
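For completeness, the first, algorithmic sense in code: loop until a condition is satisfied, here in the shape of Newton's method for square roots.

```python
# Algorithmic iteration: run the same step over and over until a
# condition holds; Newton's method for square roots as an example.

def newton_sqrt(x: float, tolerance: float = 1e-12) -> float:
    guess = x if x > 0 else 1.0
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2  # average of guess and x/guess
    return guess

print(newton_sqrt(2.0))  # 1.41421356...
```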
Image: Stable Diffusion with the prompt »the parallels between design thinking and computational thinking are iteration and iteration. colorful, insane detail, design sketch«