“It’s the only job I can think of where I get to be both an engineer and an artist. There’s an incredible, rigorous, technical element to it, which I like because you have to do very precise thinking. On the other hand, it has a wildly creative side where the boundaries of imagination are the only real limitation.”
– Andy Hertzfeld
Last time, I described what computation is. I also presented the basics of automata and finite state machines. Finally, I outlined abstraction as a way of dealing with complexity.
Today, we build on this theoretical foundation. We will begin to move our discussion more towards practical ends, however, and the programming of computers. I will again try to present only the most immediately relevant information.
With that said, I am predominantly keeping in mind how all these articles will sequence together. My goal is something approximating global optimality for the series, not local optimality for each article. While I hope each instalment is somewhat self-contained – which is to say, valuable even if read in isolation – I am still mostly writing to ensure you get what you need from the series. The implication is that I am not attempting to write, “THE COMPREHENSIVE GUIDE TO LEARNING PROGRAMMING IN 2021.”
The content will ebb and flow. Topics will be introduced then reinforced. As I see it, the acquisition of knowledge takes time and consistency – and pedagogical methods should reflect that. Knowledge is not delivered with brute force, in a knockout blow. Instead, it is more akin to the turning of sand into a pearl or a seed into a tree.
And with that, let’s dive in.
User-error or computer-error?
While the metaphor of pearls and trees makes the idea of learning to program seem very romantic, you will likely face a lot more frustration than the slow and steady oak tree does as it grows. There will be many mistakes and setbacks along the way. For you to be ready to tackle programming, we must first address why these setbacks occur. Is it the computer that is wrong, or you?
At this point you might think the answer is obvious and, additionally, that you are humble enough to admit it. “Of course, it will be me,” you say.
When it comes down to the crunch though, it’s not that easy. Trust me.
You may have gone over a program, line by line, fifty-plus times. You cannot, for the life of you, see what is wrong; as far as you can tell, your program should be working. Your only conclusion: there is something wrong, somewhere, in the computer.
This is the easiest of all excuses when you’re learning to program because you know so little about computers. They are the conceptual black box. You have little clue about their workings and, as such, you can blame any and all faults on them.
This is the wrong move.
This can be a very hard lesson to learn, though, so don’t underestimate it. Many give up their programming endeavours because they get stuck on issues like this. Others persist, however, telling themselves that if something isn’t working, there is something within their control to fix it. They do not dismiss the project by laying blame at the computer’s door.
The reward of this persistence? A thorough kicking of oneself when you do eventually find the issue — hours, days or even weeks later. “It was there all along,” we sigh.
This is both the joy, and tragedy, of programming. It is one of the bluntest forms of feedback one could receive. A program runs or it doesn’t. Once it runs, it outputs what it should have, or it doesn’t. And be warned, there are many, many more occasions of “it doesn’t” than “it does.”
This is the beauty of it too, though.
Programming forces you into epistemological clean-up mode. Anytime you feel tempted to reach for the it-should-be-working card, you need to acknowledge that your model of the situation is wrong. There is a key piece of the puzzle that you don’t understand; or even, more simply, there is a misspelt word or incorrectly typed variable. I know that many times I have been stuck because I made a variable an integer (whole number), when it should have been a floating-point (containing decimal places).
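To make that concrete, here is a minimal sketch (in Python, with made-up numbers) of how an integer-versus-floating-point slip can quietly break a program:

```python
# Three test scores whose true average is 83.333...
scores = [90, 80, 80]

# Buggy: `//` is floor (integer) division in Python, so the
# fractional part of the average is silently discarded.
buggy_average = sum(scores) // len(scores)   # 83

# Fixed: `/` performs floating-point division.
fixed_average = sum(scores) / len(scores)    # 83.333...

print(buggy_average, fixed_average)
```

The program runs without complaint either way; only the output betrays the bug, which is exactly why such errors are so easy to stare past.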
These errors – or “bugs” – can be overwhelmingly frustrating. If you are to overcome this frustration, however, you must keep in mind that learning to fix bugs is what improves your understanding. That’s what programming is about.
Two quotes from Seymour Papert’s seminal book Mindstorms: Children, Computers and Powerful Ideas drum this point home for me:
“When you learn to program a computer you almost never get it right the first time. Learning to be a master programmer is learning to become highly skilled at isolating and correcting ‘bugs,’ the parts that keep the program from working. The question to ask about the program is not whether it is right or wrong, but if it is fixable. If this way of looking at intellectual products were generalised to how the larger culture thinks about knowledge and its acquisition, we might all be less intimidated about our fears of ‘being wrong.’”
“The process of debugging is a normal part of the process of understanding a program. The programmer is encouraged to study the bug rather than forget the error.”
I think both are phenomenal.
The first, among other things, speaks to the normalisation of errors. The second articulates how one’s own understanding is progressed; by leaning into – not away from – those errors.
Why all this talk about errors?
I will tell you, but let me back up a bit first.
One of the eye-opening – but, in hindsight, unsurprising – things that learning about computers did for me was take away the “magic.” I would push keys or click buttons and the thing I wanted to happen, did – often, but not always. When it worked, as I said, it was magical; I had no explanation for it. When it didn’t, well, I still had no explanation for it – except that, in those instances, I considered the computer to be broken.
However, now — still without understanding all of what goes on — the magical aspect has been taken away. The once opaque black box has become partially transparent, and I can now see some of the inner workings. A computer, just like a biological organism, is just a bunch of systems and sub-systems.
It sounds too blatantly obvious to carry any semblance of profundity, but a computer will only do what you tell it – relative to the state of all its subsystems (does it have enough memory? is the keyboard plugged in? is the required app open?). We like to convince ourselves that there is something non-deterministic going on during times of error – random fluctuations in the computer’s mood, or something – but this is just to excuse our own lack of competence at the time.
(To be clear, I’m not talking about genuine malfunctions here. I’m talking about operator-error. A far more common form of problem.)
For instance, we’ve all had those moments where we’ve done something, like click the mouse, and the computer hasn’t done what we thought it would. The result is usually some internal — and occasionally, external – screaming of, “THAT’S NOT WHAT I MEANT!”
I now realise, however, that it could really be no other way. This is something you will have to come to grips with, too. The computer only does what it is told. That is why we are talking about errors. As a human, you are prone to making them – and the computer will follow suit, no questions asked.
In fact, this unwaveringly mechanical nature of computers is so integral – but counterintuitive — for budding programmers and computer scientists to learn, that it seems to be a rite of passage to have the knowledge bestowed upon you. I say counterintuitive because we often put computers on a pedestal for all the amazing complex tasks they achieve. They appear smart or intelligent to us.
However, one of the canonical first things you are taught when learning about computers and software – other than how to print “Hello, World!” to the screen – is that computers are dumb. It sounds dramatic, but what is meant by this is that computers have no inferential ability.
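In Python, for example, that canonical first program is a single line:

```python
# The traditional first program: write a fixed message to the screen.
print("Hello, World!")
```

The computer does exactly this – no more, no less – which is the whole point of the lesson.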
Computers have no capacity to notice that even though I (the user) am clicking a specific button, what I really want to be doing is clicking a different button and, as such, executing a different command. The computer can’t do this — it isn’t a mind-reading device — it just executes what it is told. If you or I click the wrong button, that’s on us.
Additionally, this only gets truer as we move from clicking buttons to writing actual programs – where the freedom of action and, thus, error is so much more substantial.
Up until this point, the vast majority – if not all – of your computer use will have been through the graphical user interface (GUI): directing the computer with the mouse and icons. You won’t have tried to “speak computer” in any real sense, typing commands in a language that both you and it can comprehend. Furthermore, you will only have been a user of applications, not an author.
While interacting with the computer in these ways doesn’t magically give it inferential abilities, it does make the machine more intuitive and user-friendly. Hence why things are the way they are. However, as we begin to peel back the hood and remove the guard-rails – to mix metaphors – you will notice that there is a much vaster array of ways-things-can-go-wrong. You will also be left with much more cryptic error messages when things inevitably do. This is all part of the journey.
You can’t acquire the touted benefits that learning to program supposedly has – such as improved problem-solving skills – if you’re not forced to solve more cryptic, abstract, or complex puzzles than you otherwise would have. If you can manage to see this as a virtue — rather than a vice — of programming, you will likely go a long way.
Again, let me reiterate: it will be challenging. But it is important to keep in mind that the errors you face will be solvable. To reach a solution, however, you will need to resist the reflex of “something is wrong” or “the computer is broken.” You put the error there.
Now, ask yourself: How can I take it away?
Dumbness as a feature, not a bug
As I said earlier, computers are dumb.
This, however, is a good thing – provided we steer them correctly. The fact that computers have no inferential ability makes them reliable and predictable – to a far greater extent than any human has ever been.
With the age of self-driving cars (appearing to be) on the horizon, it is common to hear phrases such as, “I wouldn’t dare trust a computer to drive me around. No way I could rely on it to do the right thing when it came down to it!”
And, at the time of writing, you could be considered wise for holding that opinion; but likely not forever.
When the current iteration of self-driving cars doesn’t do the “right” thing, that isn’t a knock on self-driving cars – like technophobes think it is — but an indication that we, both as humans and programmers, don’t know what the “right” thing to do is.
We can’t define it.
Again, computers will only do as instructed. For the most part, though – and there are important and egalitarian caveats to this – society will be, and is being, improved by increased computer-based automation. You might not “trust” computers to drive you around yet, but you already do trust them to (mostly) pilot the aeroplanes you fly in, to shift the money around that pays your bills, and to guide the cargo ships and delivery drivers that get the crap you purchased from Amazon to your door.
Now, I don’t mean to beat you over the head with this rant, suggesting you’re a luddite until you transform into a techno-optimist. The point of this is, simply, computers are reliable.
Fortunately, or unfortunately, this is going to highlight to you just how unreliable we humans are. And, on one of those occasions when you are frustrated because you can’t find a bug and are just thrashing around in solution-space looking for a hit, keep in mind just how reliable computers are. Not as a sadistic meditation on your own inadequacies, but instead, why learning how to guide, steer and control them is such a powerful skill. If you solve a programming problem, it will reliably stay solved.
However, the primary benefit of learning to program is not so that problems stay solved, but to better see our own thinking for what it is. Programming casts an intensely bright light on just how precisely we understand something.
This is because, as we have discussed, programming forces you to confront, and overcome, your errors and setbacks. Programming is the trade of development and debugging. You build something, then fix it. You build a little more, then fix the next round of issues. This iterative process is an epistemological solvent. You are constantly forced to face the faults and folly of your own thinking.
But what is programming, really?
Programming is the description of computational processes — that was why we talked about computation first and foremost. A program can be thought of as a sequence of instructions that tell the computer how to perform a specific computation. Specifically, programming is the art-form of doing this in a language that can be understood by both humans and computers. In this sense, programming is akin to writing the physical laws of the computational universe that you are creating. Things will then behave in the ways that your rules of nature condemn them to.
Some computations performed are mathematical. They might involve calculating the area of a shape, counting how many prime integers there are between 206 and 1047, or determining the price of all the books in my online shopping cart. On the other hand, computations can be alphabetical – such as manipulating this very text that I am typing now – or visual/graphical – like displaying the Netflix shows you might be tempted to open in your browser, if this article begins to bore you.
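As a sketch of the mathematical kind – in Python, with hypothetical book titles and prices – both of those examples fit in a few lines:

```python
def is_prime(n):
    """Trial division: slow but sufficient for small numbers."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

# One of the article's examples: how many primes lie between 206 and 1047?
prime_count = sum(1 for n in range(206, 1048) if is_prime(n))

# Another: the total price of the books in a shopping cart
# (the titles and prices here are made up for illustration).
cart = {"Mindstorms": 18.99, "Think Python": 34.50}
total = sum(cart.values())

print(prime_count, total)
```

Nothing here is exotic; the point is only that each computation is a precise description of a process, not a fact the machine already “knew.”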
All these things can be achieved by computers if they have the right set of instructions. To understand what this means at a deeper level, let us now look at types of knowledge. This may be new to some, and old to others, but it is worth reiterating regardless.
Philosophers categorise knowledge in a variety of ways. One widely recognised way, however, is conceptualising knowledge as declarative or imperative.
Declarative knowledge relates to knowing-that. Imperative knowledge is knowing-how. Declarative knowledge is about facts. Imperative knowledge is about processes. Computation and programming are imperative forms of knowledge.
This distinction might seem subtle, or irrelevant, but it is important for understanding computers – and, as such, how to program them. The implication here is that a computer does not have all the information it provides to you, at various times, stored away in some unlimited source of memory.
A computer doesn’t “know” what the square root of 169 is, or the statistics of your character in some role-playing game. Computers can provide these for us because they have sets of instructions describing how to answer the questions asked of them – by launching an application, running a protocol, or executing an algorithm. These things may require retrieving pieces of information from various databases, which we may think of as declarative knowledge, but that process too must be described: which database, and where is it found? Which pieces of information? How should the retrieved data be filtered or sorted?
Learning to program is learning to think systematically about how correct answers can be found to certain problems.
Why should you care?
Ultimately, I want this series to help induce computational thinking – which I will elaborate on next time.
For now, let me leave you with this quote on the topic of thinking like a programmer. It is from Think Python by Allen B. Downey:
“This way of thinking combines some of the best features of mathematics, engineering, and natural science. Like mathematicians, computer scientists use formal languages to denote ideas (specifically computations). Like engineers, they design things, assembling components into systems and evaluating trade-offs among alternatives. Like scientists, they observe the behavior of complex systems, form hypotheses, and test predictions.”
This is quite the smorgasbord of mental stances – each with their own strengths and weaknesses. Developing the ability to adopt each of these, at different times, is one of the virtues of learning to program. You cannot think only like the scientist, engineer, or mathematician. You must channel elements of each, at different times, depending on the specific problem at hand.
It is an exciting endeavour, and we will continue our exploration of it again soon enough.
Thanks for reading.