“We’re presently in the midst of a third intellectual revolution. The first came with Newton: the planets obey physical laws. The second came with Darwin: biology obeys genetic laws. In today’s third revolution, we’re coming to realize that even minds and societies emerge from interacting laws that can be regarded as computations. Everything is a computation.” – Rudy Rucker
FYI: “IP&CS” is what I am shortening “Introduction to Programming & Computer Science” to. Nobody, especially me, wants to have to write that every time.
Before we begin our exploration of programming and computer science concepts, we must ask – and attempt to answer – some fundamental questions. This shall help us orient ourselves and shed some light on the trajectory of this series. I will, however, cut some corners.
Many times will I stray from the well-trodden and conventional intro-to-CS path. For example, I am going to skip the history lesson around Charles Babbage, Ada Lovelace, Alan Turing, John von Neumann and the rest of the Algorithmic All-stars. Not that they weren’t smart and interesting people – by all accounts they were dynamite.
The goal of this series, as I said in the preface, is to come to grips with computer science concepts – with programming being the primary avenue towards this destination. We can do this without the additional “side-plot knowledge” of the field. It’s not that this information is completely irrelevant, only that it isn’t sufficiently additive to make the cut here. Without a doubt, I do believe one should have some level of awareness of the primary figures and previous paradigms within a field if one wants to make a significant contribution of their own; so please don’t consider me as discounting this. My only intention is to focus on the most immediately relevant information.
In short, we will focus on the what, how, and why, rather than the who.
What are computers? What is computation?
We will address the latter first, as that will clarify the former.
You may have some intuitive grasp of what computation is already, given how the term is used in modern parlance. To be explicit, though: computation is, in essence, calculation. Which is to say that computation is the transformation of one or more inputs into one or more outputs.
Which is to say… well, pretty much nothing of substance. Starting to see where computer science gets its abstract reputation from yet?
To concretise this concept, let’s look at a few examples.
First, think about how we calculate – or compute – the amount of change we will have left after we buy a can of Pepsi with a $5 note. We transform the inputs of MoneyTendered and ItemCost into an output of ChangeDue. Here we see that two inputs are provided and then transformed into one output, using the rules of a particular computational process — subtraction of the latter input from the former, in this instance.
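The change example can be written down in a few lines. This is a minimal sketch; the function name and prices are my own, invented for illustration.

```python
def change_due(money_tendered: float, item_cost: float) -> float:
    """Transform two inputs into one output via subtraction."""
    return money_tendered - item_cost

# $5 tendered for a $1.50 can of Pepsi leaves $3.50 in change.
print(change_due(5.00, 1.50))  # 3.5
```

Trivial as it looks, this is the whole pattern: inputs in, a rule applied, output out.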
As a more vague, though still very realistic example, consider how we calculate who (we think) should be Prime Minister, President or whatever the relevant form of Overlord is called where you’re from. When we make such a calculation, we still transform a variety of inputs into outputs; only we are working with much more fluid concepts.
Inputs for this calculation might include parameters (inputs) relating to Trustworthiness and ProposedPolicyChanges. Interestingly, though, these parameters likely arise from other, preceding calculations; for example, Trustworthiness may be determined by calculating factors such as ReputationInTheMedia, KnownAssociates and NumberOfTimesFoundLyingOnPreviousOccasions.
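One calculation feeding into another can be sketched as composed functions. Everything here — the factor names, the weights, the divisor — is invented for illustration; real voters are, alas, not this tidy.

```python
def trustworthiness(media_reputation: float,
                    known_associates_score: float,
                    times_caught_lying: int) -> float:
    # This output becomes an *input* to the next calculation below.
    # The arithmetic is an arbitrary, illustrative rule.
    return media_reputation + known_associates_score - times_caught_lying

def probability_of_receiving_vote(trust: float, policy_appeal: float) -> float:
    # Clamp to [0, 1] so the result reads as a probability.
    return max(0.0, min(1.0, (trust + policy_appeal) / 20))

trust = trustworthiness(6.0, 4.0, 3)          # a preceding calculation
print(probability_of_receiving_vote(trust, 5.0))  # 0.6
```

The point is the shape, not the numbers: calculations nest, with the outputs of earlier ones serving as inputs to later ones.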
I recognise that these are not very definitive categories, with precisely specified values, but that’s by design. I’m trying to highlight a more practical, everyday occurrence. You will often find explanations of computer science concepts to be rife with mathematical examples — which, admittedly, should be unsurprising given the closeness of the two fields. Unfortunately, though, this tends to make the content less accessible to those without such a strong mathematical background.
Regardless, all I need from you, at this stage, is to have a feel for what calculation is. The turning of one, or multiple, pieces of information into one, or multiple, outputs.
However, seeing as the examples above all showed multiple inputs resulting in a single output (i.e., Trustworthiness and ProposedPolicyChanges yielding ProbabilityOfReceivingVote), I am going to resort to a math example as proof of concept for a single input resulting in multiple outputs. Spare me this one — and a few others — please.
An example of a single-to-many calculation would be for “Numbers which can be squared to result in a particular number.” As our input, we might select the number 4. Our outputs would then be 2 and -2, as both can be multiplied by themselves and result in the number 4.
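In code form, the single-to-many calculation looks like this (a small sketch; the function name is mine):

```python
import math

def square_roots(n: float) -> tuple[float, float]:
    """One input, two outputs: both numbers that square to n."""
    root = math.sqrt(n)
    return root, -root

print(square_roots(4))  # (2.0, -2.0)
```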
Ok, proof of concept established. No more math for now.
I have been using the term calculation predominantly up until this point, but let’s return to the term computation once more — now that you have grasped the concept. If you ever feel yourself getting lost, try to remember that computations are just calculations (based on certain rules). Given all this, we can now see how – in some sense – computer science is the study of taking inputs and transforming them into outputs; be it one or many, in either case.
To go one step further in your CS-education journey, we have a name which describes all of this: symbol processing.
The term symbol is important because it abstracts away what type of inputs a computational process can act upon, or the kind of outputs it will produce. As we are all mostly aware by now, computers don’t just perform calculations on numbers. Computers are, in some sense, restricted to the mathematical – though, what can be represented mathematically is a vast, vast number of phenomena. As such, very “human” things, including speech or forms of visual stimuli can be processed by computers. This is why we use the phrase symbol processing to describe what computers do.
The final piece of the puzzle that I need to introduce at this point is the process of automation. We have established what computation is; however, for something to be recognised as a computer, it needs to be mostly autonomous or automatable. This roughly means that the user doesn’t have to explicitly describe or execute each step along the way; there is some level of predetermined instructions, or a series of states, already embedded into the system.
For example, the following is a very rudimentary “state diagram.” State diagrams are commonplace in theoretical computer science and are used to depict finite state machines — abstractions used to model computational processes.
This depiction demonstrates a level (albeit small) of automation or pre-determined instructions/series of states.
To be explicit: Our simple example shows a system that starts at STATE 1 and then proceeds from there depending on what kind of input it receives. The following transitions it will undergo then depend on the input and the state it is in at the time it receives it.
For example, if our finite state machine is initially given input 0, it stays in (or returns to) STATE 1. However, if it is given 1 as an input, it transitions into STATE 2. The same process is then true of STATE 2: receiving 0 as an input keeps it in the same state, whereas receiving a 1 changes its state — moving it back to STATE 1.
Building on this, we can define all that one would need to know about a finite state machine quite easily. All we would need to do is provide its original state, its possible states, and the inputs that determine what transitions it makes.
If we let S represent state, and I represent input, such that S1 is STATE 1 and I0 is input of 0, we then get the following for the above diagram:
Initial state – S1
S1 + I0 = S1
S1 + I1 = S2
S2 + I0 = S2
S2 + I1 = S1
We can see this is an example of computation. Firstly, the finite state machine combines CurrentState and InputReceived and turns that into NextState – transforming two inputs into one output.
Secondly, however, our machine is capable of a more practical application – namely, telling us whether a string of binary digits (bits) has an odd number of 1s in it. It doesn’t tell us how many 1s were in the string of bits, but we can see from its design that it will always finish in STATE 1 if presented with a series of bits that has 0, 2, 4, 6 … 1s in it. However, if it is presented with 1, 3, 5, 7 … 1s, it will finish in STATE 2.
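The whole machine can be written as a transition table plus a loop. The transitions below are exactly the four listed earlier (S1/S2, inputs 0 and 1); the dictionary encoding and the helper’s name are my own.

```python
# (current state, input) -> next state, mirroring the four rules above.
TRANSITIONS = {
    ("S1", "0"): "S1",
    ("S1", "1"): "S2",
    ("S2", "0"): "S2",
    ("S2", "1"): "S1",
}

def run_machine(bits: str, state: str = "S1") -> str:
    """Feed each bit to the machine in turn; return the final state."""
    for bit in bits:
        state = TRANSITIONS[(state, bit)]
    return state

print(run_machine("1011"))  # S2 — three 1s, an odd number
print(run_machine("1001"))  # S1 — two 1s, an even number
```

Notice that once the table is written down, no further human intervention is needed per step — which is exactly the automation point made above.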
We see a computation here that we might call TotalBinaryInput resulting in FinalState, and we can derive something of small, but non-trivial, value from it.
With all of this now said, we have laid the groundwork for what computation is and computers are. Computation is the process of transforming information and computers are automatable machines that process symbols (with numbers only being one example). Subrata Dasgupta summarised this in his book, Computer Science: A Very Short Introduction when he said:
“…the stuff of computing is symbol structures. Computing is symbol processing. Any automaton capable of processing symbol structures is a computer.”
I hope you now have some semblance of clarity over what these words mean, how they are used and why. While the examples we’ve used so far are simplistic, they are cut from the same conceptual cloth as the computers you know and use to read this very post. Modern computers are scaled-up versions of the primitive designs and processes that I have presented here. As such, they can perform much more complex and interesting computations.
This leads to somewhat of a problem, however. When we scale up the complexity of something, how do we continue to work with it and effectively hold its model in our mind?
That is the topic of our next section.
How to deal with complexity: hierarchical systems
Computers — like ogres and onions — have layers.
What I mean by layer here is something akin to a sub-system: something which serves a (relatively) specific purpose and exists within — and contributes to — something else that has a different functional purpose (and vice versa with a parent-system). You could also consider these organisational formats as levels, rather than layers, and note that these levels are organised into a hierarchy.
This kind of structure is not unique to computational architecture, however; biological organisms are also organised in this way. For example, you – as a whole – have a cardiovascular system. One part of that cardiovascular system is your heart. Your heart has components of its own, such as valves — specifically the aortic, pulmonary, mitral, and tricuspid. The specific function of these valves is different from the specific function of your heart, and so on up through your cardiovascular system to you as a whole. These differing purposes are not irreconcilable, however; each level contributes to the one above it.
When we move from one level of the system to another, focusing on the important features and functions at that level, we are changing the level of abstraction.
In a colloquial computer science sense, abstraction is often stated as “I don’t care how it does it; only what it does” – or something very similar. More precisely, though, abstraction refers to the removal of nonessential details, such as the specific way something is implemented – focus is placed on what a program calculates, a function returns, or a procedure does, more so than how each would do it.
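A quick sketch of “what, not how”: two interchangeable implementations behind the same interface. The function names and the task are illustrative, not from any particular library.

```python
def sum_to_n_loop(n: int) -> int:
    # Implementation detail: iterate and accumulate.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_to_n_formula(n: int) -> int:
    # Implementation detail: Gauss's closed-form formula.
    return n * (n + 1) // 2

# A caller abstracts away the "how": either function will do.
print(sum_to_n_loop(100), sum_to_n_formula(100))  # 5050 5050
```

From the caller’s level of abstraction, the two are identical; only when you descend a level do the differing “hows” come into view.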
Adjusting your level of abstraction changes how you perceive or focus on parts and the degree to which you group things together or separate them. This is an important skill for managing the complexity of computers and programs and one you can hopefully cultivate over the course of this series.
This is an important concept, and skill that is transferable outside of just computer science, because all humans are limited by their working memory. We can each hold and manipulate 4-ish chunks of information in our mind at one point in time. Learning to move up and down levels of abstraction, however, allows you to better manage your limited working memory resources.
We could think about it like this:
We’ve all had that feeling of opening a textbook, or sitting in the first lecture, for a new subject. It feels like you are drowning in information and your brain is being pulled every which way. You’ve just wrapped your head around what “scarce” means in this context, and same for “resource,” yet now you keep getting confronted with the phrase “allocation and distribution” and you think to yourself: What the bloody hell does that have to do with what we were just talking about?
So you try to work that out, but now, all of a sudden, you’re struggling to remember what defines a resource again — let alone what types of them there are!
Uh oh… here come even more new terms: “efficient markets,” “Keynesian” and “commoditization” … You’re trying to mentally juggle far more terms than you can handle, and you end up dropping them all. These 10+ pieces of information completely exhaust — and then some — your limited space in working memory.
An experienced economist, however, would simply have all these pieces of information grouped together and filed under Economics101. This may then only take up one of the available spots in working memory, freeing up mental space for other pieces of information or the ability to perform mental operations.
This is not a perfect comparison to levels of abstraction, but there are many parallels. As we move through this series, you will hopefully learn enough about each little piece of the puzzle that you feel confident to ignore its implementation details from that point onwards, thus freeing up your own cognitive resources.
You will also see that there are many ways to achieve a single outcome. As such, the method of implementation is mostly irrelevant – depending on efficiency requirements and other constraints, of course. If one way of doing things is twice as fast and uses half as much memory, then that’s probably a better option. However, if the faster option takes 0.01 seconds and the slower takes 0.02 seconds, and there are vast amounts of memory left over in either case, the implementation can likely be ignored.
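Here is a rough sketch of weighing two implementations of the same task. Both functions and the timing setup are my own for illustration, and the absolute timings will vary by machine, so treat them as indicative only.

```python
import timeit

def join_with_plus(words):
    # Repeated string concatenation: simple, but does more copying.
    s = ""
    for w in words:
        s += w
    return s

def join_builtin(words):
    # str.join: typically the faster, more idiomatic approach.
    return "".join(words)

words = ["x"] * 10_000
# Same output either way; only the running time differs.
print(timeit.timeit(lambda: join_with_plus(words), number=100))
print(timeit.timeit(lambda: join_builtin(words), number=100))
```

On most machines both finish in a small fraction of a second here, so — per the point above — the implementation choice can often be ignored until the inputs grow large enough for the difference to matter.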
Learning when and what you can and can’t abstract will be key. But first, we must develop some general agility when it comes to moving up and down the levels of abstraction (hierarchy of organisation within the system) as this will be integral for learning to think as a computer scientist does.
Computation is calculation, which is turning inputs into outputs, based on some rules. We call this symbol processing. Computers are automatable machines which we can instruct to process symbols in the way that we want them to.
Computers require a lot of parts and operations to do this; yet we can manage this complexity through abstraction and by ignoring various implementation details.
As a final word, computers are not magical – they do basic things, only very, very well. For the most part, they only really do two basic things: perform calculations and remember results. Computers are impressive because they can do lots of calculations per second as well as store vast amounts of information in memory.
More about that in time to come, though.
Thanks for reading!