The Programming Mindset (IP&CS #2)


Today, we will explore the programming mindset.

Last time, I described what computation is. I also presented the basics of automation and finite state machines. Finally, I outlined abstraction as a form of dealing with complexity. This post, however, will focus on the frames of mind that will help you effectively grapple with these notions.

Cultivating a programmer’s mindset seems well and truly worthwhile to me. A skill worth having. Irrespective of whether you end up writing code for a living.


“It’s the only job I can think of where I get to be both an engineer and an artist. There’s an incredible, rigorous, technical element to it, which I like because you have to do very precise thinking. On the other hand, it has a wildly creative side where the boundaries of imagination are the only real limitation.”

Andy Hertzfeld


First, a quick note on the content of this series as a whole: it will ebb and flow.

As I see it, the acquisition of knowledge takes time and consistency. Pedagogical methods should reflect that. Knowledge cannot be delivered with brute force. Instead, it is more akin to the turning of sand into a pearl or a seed into a tree. I just wanted to make you aware of my approach from this early vantage-point.

And with that, let’s dive in.


User-error or computer-error?

While the metaphor of pearls and trees makes the idea of learning to program seem very romantic, you will likely face a lot more frustration than the slow and steady oak tree does as it develops. There will be many mistakes and setbacks along the way. For you to be ready to tackle programming, we must first address why these setbacks occur. Is it the computer, or you, who is wrong?

At this point you might think that it is obvious and, additionally, that you are humble enough to admit it. “Of course, it will be me,” you say.

When it comes down to the crunch though, it’s not that easy. Trust me.

You may have gone over a program, line-by-line, fifty-plus times. You cannot for the life of you see what is wrong; you can only see why your program should be working. Your only conclusion: there is something wrong, somewhere, in the computer.

This is the easiest of all excuses when you’re learning to program because you know so little about computers. They are a conceptual black box. You have little clue about their inner workings and, as such, you can blame any and all faults on them.

This is the wrong move.

This can be a very hard lesson to learn, though, so don’t underestimate it. Many give up their programming endeavours because they get stuck on issues like this. Others persist, however, telling themselves that if something isn’t working, there is something within their control to fix it. They do not dismiss the project by laying blame at the computer’s door.

The reward of this persistence? A thorough kicking of oneself when you do eventually find the issue — hours, days or even weeks later. “It was there all along,” we sigh.  

This is both the joy, and tragedy, of programming. It is one of the bluntest forms of feedback one could receive. A program runs or it doesn’t. Once it runs, it outputs what it should have, or it doesn’t. And, be warned, there are many, many more occasions of “it doesn’t” than “it does.”

This is the beauty of it too, though.

Programming forces you into epistemological clean-up mode. Any time you feel tempted to reach for the it-should-be-working card, you need to acknowledge that your model of the situation is wrong. There is a key piece of the puzzle that you don’t understand; or, even more simply, there is a misspelt word or an incorrectly typed variable. I know that I have often been stuck because I made a variable an integer (a whole number) when it should have been a floating-point number (one containing decimal places).
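To make that last mistake concrete, here is a minimal Python sketch (my own hypothetical example, not taken from any particular program) of how treating a value as an integer quietly produces the wrong answer:

```python
# Splitting a bill three ways: a classic integer-versus-float slip.
total = 100          # the amount to split
people = 3

# Integer division (//) silently discards the remainder...
share_wrong = total // people   # 33 -- everyone underpays

# ...whereas true division (/) keeps the fractional part.
share_right = total / people    # 33.333...

print(share_wrong, share_right)
```

The program runs without complaint either way; only the output reveals that your mental model of the division was off.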

These errors – or “bugs” – can be overwhelmingly frustrating. If you are to overcome this frustration, however, you must keep in mind that learning to fix bugs is what improves your understanding. That’s what programming is about.

The following two quotes from Seymour Papert’s seminal book Mindstorms: Children, Computers and Powerful Ideas drum this point home for me.


“When you learn to program a computer you almost never get it right the first time. Learning to be a master programmer is learning to become highly skilled at isolating and correcting ‘bugs,’ the parts that keep the program from working. The question to ask about the program is not whether it is right or wrong, but if it is fixable. If this way of looking at intellectual products were generalised to how the larger culture thinks about knowledge and its acquisition, we might all be less intimidated about our fears of ‘being wrong.’”



“The process of debugging is a normal part of the process of understanding a program. The programmer is encouraged to study the bug rather than forget the error.”



I think both are phenomenal.

The first, among other things, speaks to the normalisation of errors. The second articulates how one’s own understanding is advanced: by leaning into, not away from, those errors.


Why all this talk about errors?

I will tell you, but let me back up a bit first.

One of the eye-opening — but unsurprising, in hindsight — things that learning about computers did for me, was take the “magic” aspect away. I would push keys or click buttons and the thing I wanted to happen, did – often, but not always. When it worked, as I said, it was magical; I had no explanation for it. When it didn’t, well, I still didn’t have any explanation for it. Except I considered the computer to be broken in these instances.

However, now — still without understanding all of what goes on — the magical aspect has been taken away. The once opaque black box has become partially transparent, and I can now see some of the inner workings. A computer, just like a biological organism, is just a bunch of systems and sub-systems.

It sounds so blatantly obvious that it lacks any semblance of profundity, but a computer will only do what you tell it — in relation to the state of all its subsystems (does it have enough memory? Is the keyboard plugged in? Is the required app open?). We like to convince ourselves that there is something non-deterministic going on during times of error — random fluctuations in the computer’s mood, or something – but this is just to excuse our own lack of competence at the time.

(To be clear, I’m not talking about genuine malfunctions here. I’m talking about operator-error. A far more common form of problem.)

For instance, we’ve all had those moments where we’ve done something, like click the mouse, and the computer hasn’t done what we thought it would. The result is usually some internal — and occasionally, external – screaming of, “THAT’S NOT WHAT I MEANT!”

I now realise, however, that it could really be no other way. This is something you will have to come to grips with, too. The computer does what it is told. And only that. That is why we are talking about errors. As a human, you are prone to making them – and the computer will follow suit, no questions asked.

In fact, this unwaveringly mechanical nature of computers is so integral – but counterintuitive — for budding programmers and computer scientists to learn, that it seems to be a rite of passage to have the knowledge bestowed upon you. I say counterintuitive because we often put computers on a pedestal for all the amazing complex tasks they achieve. They appear smart or intelligent to us.

However, one of the canonical first things you are taught when learning about computers is that computers are dumb. It sounds dramatic, but what is meant by this is that they have no inferential ability.

Computers have no capacity to notice that even though I (the user) am clicking a specific button, what I really want to be doing is clicking a different button and, as such, executing a different command. The computer can’t do this. It isn’t a mind-reading device. It just executes what it is told to do. If you, or I, click the wrong button, that’s on us.

Additionally, this only gets truer as we move from clicking buttons to writing actual programs – where the freedom of action and, thus, error is so much more substantial.

Up until this point, the vast majority, if not all, of your computer use will have been through the graphical user interface (GUI) – directing the computer with the mouse and icons. You won’t have tried to “speak computer” in any real sense, typing commands in a language that both you and it can comprehend. Furthermore, you will have only been a user of applications, not an author.

While interacting with the computer in these ways doesn’t magically give it inferential abilities, they do still make it more intuitive and user-friendly. Hence, why things are the way they are. However, as we begin to peel back the hood and remove the guard-rails – to mix metaphors — you will notice that there is a much vaster array of ways-things-can-go-wrong. You will also be left with much more cryptic error-messages when things inevitably do. This is all part of the journey.

You can’t acquire the benefits that learning to program comes with if you avoid the struggles. You won’t improve your problem-solving skills unless you’re forced to solve more cryptic, abstract, or complex puzzles than you otherwise would have. If you can manage to see this as a virtue — rather than a vice — of programming, you will likely go far.

Again, let me reiterate, it will be challenging. But it is important to keep in mind that the errors you face will be solvable. To reach the point of solution, however, you will need to resist the reflex of “something is wrong” or “the computer is broken.”

It was you who put the error there.

Now, ask yourself: How can I take it away?


Dumbness as a feature, not a bug

As I said earlier, computers are dumb.

This, however, is a good thing — provided we steer them correctly. The fact that computers have no inferential ability makes them reliable and predictable — to a much greater extent than any human has ever been.

With the age of self-driving cars (appearing to be) on the horizon, it is common to hear phrases such as, “I wouldn’t dare trust a computer to drive me around. No way I could rely on it to do the right thing when it came down to it!”

And, at the time of writing, you could be considered wise for holding that opinion; but likely not forever.

When the current iteration of self-driving cars doesn’t do the “right” thing, that isn’t a knock on self-driving cars – like technophobes think it is — but an indication that we, both as humans and programmers, don’t know what the “right” thing to do is.

We can’t define it.

Again, computers will only do as instructed. For the most part, though – and there are important and egalitarian caveats to this – society will be, and is being, improved by increased computer-based automation. You might not “trust” computers to drive you around yet, but you already do trust them to (mostly) pilot the aeroplanes you fly in, to shift money around that pays your bills, as well as guide the cargo-ship and delivery-driver that gets the crap you purchased from Amazon to your door.

Now, I don’t mean to beat you over the head with this rant, suggesting you’re a luddite until you transform into a techno-optimist. The point of this is, simply, computers are reliable.

Fortunately, or unfortunately, this is going to highlight to you just how unreliable we humans are. And, on one of those occasions when you are frustrated because you can’t find a bug and are just thrashing around in solution-space looking for a hit, keep in mind just how reliable computers are. Not as a sadistic meditation on your own inadequacies, but as a reminder of why learning how to guide, steer and control them is such a powerful skill. If you solve a programming problem, it will reliably stay solved.

However, the primary benefit of learning to program is not so that problems stay solved, but to better see our own thinking for what it is. Programming casts an intensely bright light on just how precisely we understand something.

This is because, as we have discussed, programming forces you to confront, and overcome, your errors and setbacks. Programming is the trade of development and debugging. You build something, then fix it. You build a little more, then fix the next round of issues. This iterative process is an epistemological solvent. You are constantly forced to face the faults and folly of your own thinking.


But what is programming, really?

Programming is the description of computational processes. That is why we talked about computation first and foremost.

A program is a sequence of instructions that tell the computer how to perform a specific computation. Specifically, programming is the art-form of implementing these computations in a language that can be understood by both humans and computers. In this sense, programming is akin to writing the physical laws of the computational universe that you are creating. Things will then behave in the ways that your rules of nature condemn them to.

Some computations performed are mathematical. They might involve calculating the area of a shape, counting how many prime integers there are between 206 and 1047, or determining the price of all the books in my online shopping cart. On the other hand, computations can be alphabetical – such as manipulating this very text that I am typing now. Or visual/graphical – like displaying the Netflix shows you might be tempted to open in your browser, if this article begins to bore you.
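To show what such a set of instructions can look like, here is a small Python sketch (my own illustrative example) that performs one of the mathematical computations above – counting the primes between 206 and 1047:

```python
def is_prime(n):
    """Return True if n is a prime number, False otherwise."""
    if n < 2:
        return False
    for divisor in range(2, int(n ** 0.5) + 1):
        if n % divisor == 0:
            return False
    return True

# Count the primes between 206 and 1047, inclusive.
prime_count = sum(1 for n in range(206, 1048) if is_prime(n))
print(prime_count)
```

It is a toy example, but every computation a computer performs is, in the end, a recipe of this kind.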

All these things can be achieved by computers if they have the right set of instructions. To understand what this means at a deeper level, let us now look at types of knowledge. This may be new to some, and old to others, but it is worth reiterating regardless.

Philosophers categorise knowledge in a variety of ways. One widely recognised way, however, is conceptualising knowledge as declarative or imperative.

Declarative knowledge relates to knowing-that. Imperative knowledge is knowing-how. Declarative knowledge is about facts. Imperative knowledge is about processes. Computation and programming are imperative forms of knowledge.

This distinction might seem subtle, or irrelevant, but it is important for understanding computers – and, as such, how to program them. The implication here is that a computer does not have all the information it provides to you, at various times, stored away in some unlimited source of memory.

A computer doesn’t “know” what the square root of 169 is, or the statistics of your character in some role-playing game. Computers can provide these for us because they have a set of instructions for determining how to answer the questions asked of them. This could mean launching an application, running a protocol, or executing an algorithm. These things may require retrieving some pieces of information from a variety of databases, which we may think of as declarative knowledge, but this process too must be described: Which database, and where is it found? What pieces of information? How shall the retrieved data be filtered or sorted?
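To illustrate the imperative point (this is a sketch of one common textbook approach, Newton’s method, not a claim about how any particular machine does it), a square root is produced by following a recipe rather than by recalling a stored fact:

```python
def square_root(x, tolerance=1e-10):
    """Approximate the square root of x using Newton's method."""
    guess = x / 2 if x > 1 else 1.0
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2  # refine the estimate
    return guess

print(square_root(169))  # ~13.0 -- computed, not remembered
```

The knowledge here is knowing-how: a procedure that, when followed, yields the fact.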

Learning to program is learning to think systematically about how you can derive correct answers for particular problems.


Why should you care?

Ultimately, I want this series to help induce computational thinking – which I will elaborate on next time.

For now, let me leave you with this quote on the topic of thinking like a programmer. It is from Think Python by Allen B. Downey:

“This way of thinking combines some of the best features of mathematics, engineering, and natural science. Like mathematicians, computer scientists use formal languages to denote ideas (specifically computations). Like engineers, they design things, assembling components into systems and evaluating trade-offs among alternatives. Like scientists, they observe the behavior of complex systems, form hypotheses, and test predictions.”

This is quite the smorgasbord of mental stances – each with their own strengths and weaknesses. Developing the ability to adopt each of these, at different times, is one of the virtues of learning to program. You cannot think only like the scientist, engineer, or mathematician. You must channel elements of each, at different times, depending on the specific problem at hand.

That is the programming mindset.

Learning how to implement it is an exciting endeavour. We will continue our exploration of it again soon enough.

Thanks for reading.

I am fascinated by the power of knowledge; in particular, how through its implementation we can build a better life for ourselves and others. Most specifically, I am interested in ideas related to rationality and morality. I believe we can all benefit from having a concern for both probability and people. As a student, I am studying Artificial Intelligence. As a professional, I work in mental health case management. When I am not doing one of these things, I am very likely writing for my blog, recording an episode for the "PhilosophyAu" podcast, hanging out with my nan, reading a book or, occasionally, attending a rave. A previous version of myself obtained a bachelor’s and a master’s degree in sport science and was the Manager of Educational Services for a leading health and fitness company.
