When "Constants" ... Aren't

Posted by Q McCallum on 2021-09-20

Want to improve your risk assessment? Identify, then question, the constants in your world.

A risk is a potential change that carries consequences. In order to get ahead of a risk, you have to develop a nose for the kinds of changes that might happen in your world. Exploring “what if?” scenarios is one tool to surface those possible changes.

A special kind of “what if?” is to question a constant. Or, more specifically, to question something that you assume is a constant. This thought experiment can shed light on possible changes long before they happen, which can broaden your understanding of your risk exposure.

What is a constant?

A constant is something that is held as unchanging. We see constants in software development, business rules, and scientific experiments. Sometimes they are facts, such as the speed of light, the acceleration of gravity, or the number of hotel stays required to achieve a higher loyalty status. Other times, they are used as a convenience to simplify calculations or planning: “for the purposes of this exercise, let’s hold the number of customers at 10,000.”

In the real world, constants … sometimes aren’t. People change jobs. Housing prices fall. As do nations. And, much to the detriment of predictive models, a pandemic may suddenly and dramatically change consumer spending habits in a way that invalidates years’ worth of historical training data. Assuming something won’t change – treating it as a constant – is a way to lull yourself into a false sense of security.

Is it a number, or a distribution?

Let’s say that you’ve built a system that performs some calculations. You’ve probably defined some numeric constant somewhere, and the system passes that value into various formulas as part of its daily operation.

Have you ever tested different values of that constant? If you haven’t, you can start now. Change the value of the constant. When you rerun the calculations, what other changes do you see? And what impact does that have on the system as a whole?
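A rough sketch of that kind of sensitivity test might look like the following. (The formula, the 7.5 “constant,” and the 8% overhead below are all made up for illustration; substitute your own system’s calculation.)

```python
# A hypothetical calculation that depends on a "constant."
BASE_RATE = 7.5  # the value we've always assumed

def monthly_cost(units: int, rate: float = BASE_RATE) -> float:
    """Toy formula standing in for your system's real calculations."""
    return units * rate * 1.08  # the 8% overhead is also an assumption

baseline = monthly_cost(10_000)
for candidate in (7.0, 7.5, 8.0, 9.0):
    result = monthly_cost(10_000, rate=candidate)
    print(f"rate={candidate}: {result:,.2f} (vs baseline: {result - baseline:+,.2f})")
```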

Testing that kind of change just once can be eye-opening. It would help even more to test with a variety of values. Instead of picking new values by hand, you could build a simulation that tests the system on a wide range of randomly chosen values of the no-longer-a-constant.

Even in a simulation, you still have some choice over the definition of “random.” We can borrow the idea of a random variable from statistics. Unlike a variable in software, which holds a single value, a random variable represents an entire statistical distribution. It returns a different value, from the same statistical family, each time you call it. You don’t know exactly what number you’ll get but you have a rough idea of the scope and “shape” of the possible values.

Most people will default to a Gaussian (normal) distribution for testing the no-longer-constant: “it should still be reasonably close to 7.5, but it may vary just a bit. So let’s set the mean to 7.5 and choose a very small standard deviation.” You could also pick from a uniform distribution, in which there’s equal probability of any value within a given range. You have as many choices as there are statistical distributions, really, so you can get creative.
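A minimal sketch of that simulation, using Python’s random module and a stand-in calculation (the specific distributions and parameters here are just examples, not recommendations):

```python
import random

def calculation(rate: float) -> float:
    """Stand-in for whatever your system computes from the constant."""
    return 10_000 * rate * 1.08

# Treat the former constant as a random variable instead of a fixed 7.5.
gaussian_draws = [random.gauss(7.5, 0.25) for _ in range(1_000)]  # close to 7.5
uniform_draws = [random.uniform(6.0, 9.0) for _ in range(1_000)]  # anywhere in range

for label, draws in (("gaussian", gaussian_draws), ("uniform", uniform_draws)):
    results = [calculation(r) for r in draws]
    print(f"{label}: min={min(results):,.0f}  "
          f"mean={sum(results) / len(results):,.0f}  max={max(results):,.0f}")
```

Comparing the spread of results under each distribution tells you how sensitive the system is to your choice of “random.”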

Are these the only options?

Another flavor of a constant is to assume a fixed number of possible outcomes, or a fixed range of input values.

Consider the statistics textbook classic of rolling dice. A fair die has six sides, each of which has an equal probability of coming up on every roll. Rolling one is an easy way of splitting workloads into six different, balanced queues for processing.

Sort of. It’s possible for the dice (or an equivalent random-choice system) to be artificially biased. A loaded die will return one side more often than the others.

A sorting system that relies on an unbiased roll is at the mercy of honest dice. If you never validate that assumption by checking that queue loads are roughly equivalent, a loaded die will quietly throw your work-processing system out of balance.
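Here’s a sketch of that validation idea: simulate the queue assignment with both a fair die and a loaded one, then flag any queue whose load strays too far from its expected share. (The 20% tolerance and the specific weights are arbitrary choices for illustration.)

```python
import random
from collections import Counter

def fair_die() -> int:
    return random.randint(1, 6)

def loaded_die() -> int:
    # Side 6 comes up five times as often as any other side.
    return random.choices(range(1, 7), weights=[1, 1, 1, 1, 1, 5])[0]

for label, die in (("fair", fair_die), ("loaded", loaded_die)):
    loads = Counter(die() for _ in range(60_000))  # one "roll" per job
    expected = 60_000 / 6
    # Crude balance check: flag queues more than 20% off the expected share.
    skewed = {q: n for q, n in loads.items() if abs(n - expected) / expected > 0.2}
    print(f"{label}: {dict(sorted(loads.items()))}  skewed={skewed or 'none'}")
```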

Similarly, consider a system that can handle any mix of results from the dice, but still expects integer values in the 1–6 range. What happens, for example, when that system suddenly gets a 7? Can your code handle that? Maybe some component upstream passes in a 3.5. Will downstream code react poorly because this is not an integer value? Or will it round it up to 4, thereby obscuring the problem in the upstream component?
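One defensive option, sketched below, is to reject those surprises outright rather than round them away. (The function name and the choice of exceptions are illustrative.)

```python
def validate_roll(value) -> int:
    """Reject anything that isn't an integer from 1 to 6, instead of
    silently rounding or passing it downstream."""
    if isinstance(value, bool) or not isinstance(value, int):
        raise TypeError(f"expected an integer die roll, got {value!r}")
    if not 1 <= value <= 6:
        raise ValueError(f"die roll out of range: {value}")
    return value

validate_roll(4)       # fine
# validate_roll(3.5)   # TypeError: no quiet rounding up to 4
# validate_roll(7)     # ValueError: the surprise surfaces immediately
```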

This may seem like a trivial exercise, but the idea of assuming a fixed range of values has caused large-scale trouble in financial scenarios. Consider the Black-Scholes formula for options pricing, which economist Paul Samuelson critiqued as follows:

“The essence of the Black-Scholes formula is that you know, with certainty, not what the deal of the cards will be but what kind of universe is being sampled, which gives you the assumption of the log-normal process.”

(Excerpt from When Genius Failed, p. 70)

When possible, it’s wise to test which values your systems can handle. Barring that, you can develop constraints to reject unexpected values before they make it into any calculations. Some scenarios may require that you establish alert systems, so you know when the system is operating outside of its norms.
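An alert system can be as simple as tracking the rejection rate and complaining when it drifts. The sketch below assumes a 1% threshold and a minimum sample of 100 inputs, both purely as examples:

```python
import logging

logger = logging.getLogger("input_monitor")

class InputMonitor:
    """Track rejected inputs and alert when the rejection rate drifts
    above a threshold (1% here, purely as an example)."""

    def __init__(self, alert_rate: float = 0.01):
        self.alert_rate = alert_rate
        self.seen = 0
        self.rejected = 0

    def record(self, accepted: bool) -> None:
        self.seen += 1
        self.rejected += not accepted
        if self.seen >= 100 and self.rejected / self.seen > self.alert_rate:
            logger.warning("%.1f%% of inputs rejected; the 'constants' upstream "
                           "may have changed", 100 * self.rejected / self.seen)
```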

Bringing it back to the real world: situational constants

I’ve been exploring this idea in terms of numbers, but the notion of challenging your constants also applies in the physical world. As with the numeric examples, treating your life’s constants as matters subject to change will make you more adaptable and less prone to surprises.

Your office location, who heads your company, the legality of your business model, whether people will be able to enter the office to work… How many of those do you tacitly assume will never change? And what happens when they do? Even by just a little bit?

Exploring this can seem daunting, but it takes just two steps to start. First, replace “always” with “most likely” and “never” with “shouldn’t.” With that change in your vocabulary, you’ll train yourself to accept and work through “what else?” kinds of questions.

Next, having mapped out other possibilities, ask yourself: “how will I know when the situation has changed?” Knowing what can happen is of little value if you can’t detect it in time.

Exploring these questions – challenging your life’s constants – is the first step to uncovering risks. And uncovering risks is the first step to mitigating them.