In setting up a counterpoint to the last post (The Pursuit of Risk), it seems only fitting to again begin with David Pye:
“With the workmanship of risk we may contrast the workmanship of certainty, always to be found in quantity production, and found in its pure state in full automation. In workmanship of this sort the quality of the result is exactly predetermined before a single salable thing is made…in principle the distinction between the two different kinds of workmanship is clear and turns on the question: ‘Is the result predetermined and unalterable once production begins?’” (p. 20-22)
Throughout the text, Pye sometimes refers to a predetermination of process, sometimes of result, and sometimes of quality of result. I believe this apparent ambiguity can be made clearer if we recall our analysis of “risk.” Pye is speaking about work carried out by some agent, during some process. He juxtaposes risk and certainty in order to characterize the actions of that agent and the value those actions have in the creation process. With risk, Pye discusses the in-the-moment abilities and decisions of the agent, while with certainty he highlights the effort invested before the process ever begins. To illustrate, he uses the example of writing with a pen vs. modern printing:
“The first thing to be observed about printing, or any other representative example of the workmanship of certainty, is that it originally involves more of judgment, dexterity, and care than writing does, not less: for the type had to be carved out of metal by hand in the first instance before any could be cast; and the compositor of all people has to work carefully, and so on. But all this judgment, dexterity and care has been concentrated and stored up before the actual printing starts. Once it does start, the stored-up capital is drawn on and the newspapers come pouring out in an absolutely predetermined form with no possibility of variation between them…” (p. 21)
And so, the defining characteristic of the workmanship of certainty is the focus on predetermination (of process, result, and/or quality of result). Correctness is decided before the work begins, and the work must not deviate once it has begun. Expanding this, any deviation is deemed a mistake – a reason to adjust the process in order to better achieve the predetermined correct value. The worker’s task here is to achieve the result, not determine it, and when the workmanship of certainty is in full bloom and the process, result, and quality of result are fully determined, the worker need only follow instructions: “An operative, applying the workmanship of certainty, cannot spoil the job.” (Pye p. 22)
Following the model of the previous post, I believe this type of workmanship can be generalized into a disposition, based on this core characteristic. The pursuit of certainty is the belief that the value of whatever is being done is predetermined, external to the system, or otherwise outside the direct consideration of the agent at work.
This is not really such a far-off concept; generally speaking, I believe much of our daily routine stems from the pursuit of certainty. We begin with an idea of where we want to go and get frustrated by changes in the plan. Before we even begin, we usually have an idea for what right and wrong outcomes look like. That pattern is visible in many of the cognitive structures that support our decision-making processes: We call our decisions logical or rational or even moral because they adhere to some external notion of truth or correctness. Perhaps we sometimes have to prove why we made the decision, but such a proof is only communicable because there exists some external structure for validating its correctness.
That last one might seem like a bit of a stretch, as one might assume that moral principles (ethics) are, in fact, universal truths – that they exist in spite of us, not because of us. But that concept is debatable. No matter how universally we agree upon what behaviors are right and wrong, they are still only so because of our general agreement. One need only look at the evolution of values upheld by a society to verify that ethics is ultimately as malleable as anything else. In a democratic society, such behavior is only codified following widespread agreement. Laws that codify and enforce ethics are results of social development, even if they appear to drive it.
I do not mean to undermine the value of ethical behavior. I intend only to provoke the thought that even ethics relies upon the crucial act we perform under the pursuit of certainty: We agree to agree with each other. We externalize our beliefs into a communicable, agreed-upon standard, and then agree to hold each other to that standard. We relinquish our individual interpretations of ethics in order to achieve consistency and reliability – practical outcomes of certainty. This of course has the great social benefit of objectifying ethics. Once objectified, ethics is no longer beholden to the minor perturbations that arise from our individual beliefs. For ethics to work, we must agree upon what it means to be ethical, that it is good to be ethical, and that deviation from our agreement is unethical.
Put simply: Some truths are considered objective because we, pursuing certainty, decide they should be. The benefits of pursuing certainty are vast. For starters, it grounds the scientific method. We observe the world and steadily refine our agreement as to how it usually works. As our ability to observe and model the world’s behavior improves, so does the precision of our hypotheses. From this we glean reproducibility, probability, statistics – all the important modeling strategies that empower decision-making based on prior knowledge.
Of course, I am neither an anthropologist nor a social scientist, and my ultimate interest is not ethics or the scientific method. I am instead interested in a more specific outcome of the pursuit of certainty: computational models.
In broad terms, computation is the act an agent performs in determining changes in state. Definitions for computation abound, but the important factors are an agent as the subject (something – human, animal, digital system – must perform the act), determination as the verb (responsibility for the act lies with the agent), and changes in state as the object (ultimately, the process boils down to discretized steps). An algorithm, for instance, is a step-by-step procedure an agent uses to process information. A program, similarly, is the set of instructions that tells a computer what to do in order to solve a problem. From this concept, various computational applications utilize increasing levels of abstraction in order to achieve greater complexity in behavior and results; I claim, however, that for all intents and purposes, computational models have all been modeled on the pursuit of certainty.
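To make that definition concrete, here is a minimal sketch (the example and names are my own, chosen for illustration) of computation as discretized state change: Euclid’s gcd algorithm, written so that each step is an explicit state transition performed by the agent.

```python
# A sketch of computation as discretized state change: Euclid's algorithm,
# where the state is the pair (a, b) and each loop iteration is one step.

def gcd_states(a: int, b: int):
    """Yield each successive state until the computation halts."""
    state = (a, b)
    while state[1] != 0:
        yield state
        a, b = state
        state = (b, a % b)
    yield state  # final state: gcd in the first position

# Every run over the same inputs traverses the same, fully predetermined
# sequence of states -- the workmanship of certainty in miniature.
print(list(gcd_states(48, 18)))  # [(48, 18), (18, 12), (12, 6), (6, 0)]
```

Note that nothing here is left to the agent: the result is fixed before the first step is taken, which is exactly the quality I want to examine.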
For now, I will outline several examples of computation utilized in modern artificial intelligence (AI) research. I intend to flesh these ideas out further next semester, as part of my thesis research.
- In the early days of AI development, rule-based systems attempted to capture knowledge in the form of “if-then” statements. Having accumulated a sufficiently large base of these rules, these “expert systems” (employed widely in business and medicine in the 1980s) could demonstrate excellent decision-making abilities within a narrow scope. While rule-based systems do not limit the number of possible outcomes, the rules (determined by an agent external to the system) limit the space of possible outcomes. As a simple analogy, consider cellular automata: no matter how many iterations a particular CA goes through, a cell will never turn red – unless a rule is added to instruct such behavior.
- Another field of research involves statistical inference, evidenced most clearly by Bayesian inference. This approach utilizes Bayes’ theorem, which brings together two intuitive concepts in decision-making: the probability of something based on current conditions (likelihood function), and the probability of something based on previous tendencies (prior probability). Using these two conditions, an agent bases its decisions for action on a statistical representation of what it has seen and done before. Proponents argue that this begins to capture how we learn from experience and are able to make quick decisions despite ambiguous circumstances.
- Recently, neural networks have made a comeback in AI research. These systems are used as classifiers – in most cases, of imagery. Typically, they are trained by being fed extremely large data sets of images; from these, the systems build their own internal representations of the images. These representations are self-learned, and the layers of the network allow the system to abstract its understanding to a level many would consider semantic. For example, having studied thousands of school buses, a neural network will be able to identify an entirely new example of a school bus – one it’s never seen before – perhaps based on some notion that a school bus is yellow with black wheels, or that its windows are arranged in a certain pattern, or that it is much longer than it is wide – or perhaps all of these characteristics. As the data set increases, so typically does the robustness of the internal representation.
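The cellular-automaton analogy can be made literal. Below is a minimal sketch of an elementary CA (rule 110, a standard example; the code and variable names are my own): however many iterations it runs, a cell can only ever hold a value that the predetermined rule table produces.

```python
# Elementary cellular automaton, rule 110, written as an explicit lookup
# table. The rules are fixed before the run begins, so the space of
# possible outcomes is bounded in advance: cells are 0 or 1, and no
# iteration count will ever make one "turn red."

RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Apply the rule to every cell (wrapping at the edges)."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(5):
    row = step(row)
    assert set(row) <= {0, 1}  # outputs never leave the rule table's range
```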
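Bayes’ theorem itself can be shown as a short worked example. The diagnostic numbers below are invented purely for illustration; only the formula is standard.

```python
# Bayes' theorem: posterior = likelihood * prior / evidence, i.e.
# P(H|E) = P(E|H) * P(H) / P(E). The agent's belief after seeing evidence
# is a statistical blend of current conditions and prior tendencies.

def posterior(prior: float, likelihood: float, evidence: float) -> float:
    return likelihood * prior / evidence

# Hypothetical diagnostic scenario: a 1% base rate, a test that fires 90%
# of the time when the condition is present and 5% of the time otherwise.
p_h = 0.01                                   # prior probability
p_e_given_h = 0.90                           # likelihood
p_e = p_e_given_h * p_h + 0.05 * (1 - p_h)   # total probability of evidence

print(round(posterior(p_h, p_e_given_h, p_e), 3))  # 0.154
```

Even with strong evidence, the low prior keeps the posterior modest – the system’s conclusion is fully determined by quantities fixed before the “decision” is made.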
As these examples begin to demonstrate, there is an underlying assumption that truth (a correct value) already exists – that it is either predetermined or external to the system. These computational implementations rely upon increasingly abstract methods of knowledge representation (rules, probability distributions, or semantic structures) to act. To use our previous description, the agent that performs the computation does not determine the result so much as attempt to achieve it.
All of these examples, I argue, are modeled on the pursuit of certainty. This is not to say that computation cannot be modeled on the pursuit of risk, but that our efforts to date have largely been aimed at achieving reliability and consistency in computational output. We want our systems to reveal some bit of truth – not determine it. We attempt to eliminate subjectivity from the system as much as possible. A machine that behaves outside of our expectations is wrong only because we had an expected behavior to start with.
In summary, I want to stress that I am not denying the use of this type of computation – I am even willing to admit that it is the most useful type of computation. I’m willing to admit that our own attempts to objectify and order the world are critical to the development of societal structures, not to mention the advancement of science. The pursuit of certainty, in assuming that value is an external goal to be achieved, provides much of the stability that allows us to progress as a species. But I am not willing to admit that the pursuit of certainty is everything. There is more to be had, and we are limiting computation by assuming that it must be a pursuit of certainty. Our insistence on building computational systems that are inherently certain reflects an unwillingness to let them be risky.