On Design Report Structures and Different Kinds of Prototype Tests

When presenting the results of a design project, including a prototype test, I tend to recommend this chapter order:

  • Concepts and Selection
  • Final Design
  • Test / Validation

This order is based on typical peer-reviewed papers presenting the ‘design and validation’ of ‘a novel device’ or something similar. It also assumes a substantial difference between the final design and the chosen concept that is not the direct outcome of exploratory testing with a prototype. This order works well when the test is aimed at validating a specific part of (the performance of) the design. The final design, in this set-up, functions as a kind of hypothesis that is then empirically tested.

In a course that I teach where students usually dive right into prototyping after concept selection, this order doesn’t always work. And it’s confusing for them. Especially for those students who end up effectively using (early versions of) their prototype as a sketch model to discover things about their concept and to iteratively develop their design. There is also little time available in this project (and too little technical knowledge amongst these particular students) to really develop the design as a whole very much after the concept selection.

In these cases, it would probably work better to change the order:

  • Concepts and Selection
  • Prototype test
  • Final design

You might even skip the ‘final design’ section entirely in favour of a discussion of future development. The prototype test, then, becomes not so much a focused validation of one key element within a larger complex design, but more an exploration and/or proof of principle of the chosen concept, more a validation of (the choice for) a certain solution principle than of a full design.

The role or lack of a client distinguishes academic from professional design work

Reviewing a number of (engineering) design textbooks, it strikes me that none of them discuss what a good set of concepts looks like, other than that they are the most promising options.

Together with the fact that these textbooks give little to no guidance on how to construct and present the complete case arguing the final design, this lack of discussion on the collection of concepts as a collection – and what defines it as such – seems to be a result of these books’ framing of design and the design process in a professional context.

One big difference between that professional context and an academic context (including many educational settings) is the role and presence – or lack thereof – of a client. Concept selection seems like a particularly good example of this. In a professional setting, you would present your concepts and your evaluation of them, together with a recommendation on which to proceed with, to your client(s). You would give them the final say or ‘OK’ on that decision, or at least come to a consensus. And because that decision is taken together, at a specific moment in time, in a specific project context, it matters less whether that set of concepts has a particular logic to it.

In an academic context, however, if you present concepts and a comparison at all, you present them only at the end of the project, together with the – further developed – final design. You write it all up in a single (peer-reviewed) paper. In that context, where you’ve selected a concept yourself and already further developed a design based on it, the concept comparison and ‘selection’ is no longer a forward-looking strategic proposal but a component in the justification/support for your final design. Rhetorically and epistemologically, it’s doing (or can do) something quite different.

What makes for a good set of concepts?

First of all, what is the aim?

Here, I consider concepts as a means of exploration and – as a set – as the basis for arguing why the final design embodies, or is based on, the concept it does.

What if we take the game SET as an analogy for how your concepts should differ?

When ONE aspect varies, you have something that looks like a controlled experiment. You’re changing one variable and seeing how that impacts the design’s properties and performance. Of course, when ‘one’ aspect is different, many more aspects will also be different. The world (and thus, physical artefacts) is infinitely complex.

When MORE THAN ONE aspect is varied, you either have to do the full combinatorics or find some way in which different choices in those aspects hang together (in effect, going back to the situation where only ONE overarching aspect is varied). Or, if there are no significant interaction effects between the aspects (sub-functions, domains, components) then it’s better to decouple them and decide per aspect which is preferable.
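To make the cost of the ‘full combinatorics’ concrete, here is a small sketch (the aspect names and options are hypothetical, chosen only for illustration): when aspects are varied together, the number of concepts grows multiplicatively, whereas decoupled aspects can be decided one at a time.

```python
# Illustrative sketch: varying several aspects independently leads to a
# multiplicative "full combinatorics"; decoupled aspects only add up.
from itertools import product

# Hypothetical design aspects, each with a few alternative options.
aspects = {
    "actuation": ["electric", "gas spring", "manual"],
    "frame": ["folding", "rigid"],
    "interface": ["armrests", "seat lift"],
}

# The full combinatorial set of concepts: 3 * 2 * 2 = 12 combinations.
full_set = list(product(*aspects.values()))
print(len(full_set))  # → 12

# If there are no significant interaction effects, each aspect can be
# evaluated on its own: only 3 + 2 + 2 = 7 evaluations are needed.
decoupled = sum(len(options) for options in aspects.values())
print(decoupled)  # → 7
```

With more aspects or more options per aspect, the gap between the two numbers widens quickly, which is why decoupling is attractive whenever the aspects really are independent.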

What about the case of the get-up chair combined with two knee orthoses? What about when you have A/X, A/Y, and B/Z? Could that make sense? Yes, I think so. In this case, there is a ‘wildcard’ concept. This could be a sound strategy in cases where there seems to be an obvious best option for one or a set of aspects (in this case: a powered knee orthosis). The function of the wildcard concept, then, is to check/justify that assumption. Trying to find the wildcard by asking ‘What do my two ideas have in common?’ can be a way to discover hidden or unconscious assumptions (and thus also, to find ‘more creative’ options). The emergency Covid ventilators also fall into this category, with only one concept departing from the ‘modern, digital control system’.

(Examples from my slide deck of concept set examples)

Three Ways of Justifying Design Features

Yesterday, in a discussion with a student on how to structure their design report, I found myself constructing a little typology of three types of justification for design decisions, each with their own rhetorical structure and form of presentation.

First, a particular feature of a design can be selected from alternatives developed in parallel. We do this at the overall level with concepts, usually three of them. These alternatives do not follow from one another, but are developed independently of each other; they are explorations of different approaches, and each represents a different set of trade-offs. Sometimes, these are developed in a sequence, one after the other, but they are sufficiently independent of each other that they could have been developed in parallel, as three alternative answers to the same design problem, so that each option can be evaluated using the same set of criteria. You can also do this at the level of details. Alternative ways to construct the frame, for instance, or different options for a hub assembly. In a report, you’d present these options side-by-side, with an argument for why one of them is the better choice.

Second, design features or geometries can be the endpoint of a single-track, iterative exploration or evolution. In this case you also have a number of alternatives that were considered, but they are not equivalent, and could not have been developed independently, in parallel. Instead, they form a sequence, where an evaluation of the strengths and weaknesses of each iteration forms the argument for the next one. The criteria used to get from one step to the next might differ from the considerations that led to the step after that. In a report, you can present the main stages of such an evolution, arranged chronologically, together with an explanation of the dimensions, features, or phenomena that turned out to be the most relevant, and how they shaped (and justify) the final form and properties of the part or construction.

Third, design features can also be the outcome of calculations that determine their correct or optimal value. Such design decisions may also have gone through iterations, or have been considered next to alternatives, but that history is no longer relevant for arguing the final outcome. Such decisions (a gear ratio, the length of a lever, the thickness of a beam) are best and most clearly justified by presenting a mathematical model, or formula, incorporating particular assumptions, constraints, and safety margins, leading to a single correct or optimal value.
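A minimal sketch of this third type of justification might look like the following (all the numbers and the load case are illustrative assumptions, not taken from any actual project): a standard bending-stress model, plus a safety factor, fixes the required height of a rectangular cantilever beam.

```python
# Hypothetical example: justifying a beam dimension by calculation.
# Model: maximum bending stress in a rectangular cantilever,
#   sigma = M * c / I, with M = F * L, c = h / 2, I = w * h**3 / 12
#   => sigma = 6 * F * L / (w * h**2)
# Solving for h, with the yield stress reduced by a safety factor:
#   h = sqrt(6 * F * L / (w * (sigma_yield / SF)))

def required_beam_height(load_n, span_m, width_m, yield_pa, safety_factor):
    """Smallest beam height [m] keeping bending stress below the
    allowable stress (yield stress divided by the safety factor)."""
    allowable = yield_pa / safety_factor
    return (6 * load_n * span_m / (width_m * allowable)) ** 0.5

# Illustrative load case: 500 N at the end of a 0.4 m cantilever,
# 20 mm wide steel section (yield ~250 MPa), safety factor 2.
h = required_beam_height(500, 0.4, 0.020, 250e6, 2.0)
print(f"required height: {h * 1000:.1f} mm")  # → required height: 21.9 mm
```

The point of presenting the argument in this form is that the assumptions (load case, material, safety factor) are explicit, and the final dimension follows from them without any appeal to the design history.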

Cost-Benefit Considerations in Design Arguments

In architecture, it may be perfectly acceptable to present predictions that are based purely on theoretical ideas about how people will behave and feel in response to a proposed building. Human behavior is so complex, and buildings so large, that such claims can be utterly impractical to test or otherwise validate. We have little choice but to trust the architect’s expertise, or accept an argument by analogy.

An engineer presenting a design for a novel surgical device, however, is expected to present a prototype that has been tested on simulated or even actual tissues, in addition to a theoretical model that predicts and explains its behavior. The physics of metal devices has been reliably modeled, and it is perfectly feasible to produce one-off prototypes and set up empirical experiments to validate these predictions with a reasonable investment of resources.

In design disciplines, we expect or do not expect certain types of evidence based on the possibility and cost of supplying them. Engineering arguments are subject to cost/benefit considerations, similar to the designs themselves that the arguments are about.

Diminishing Validity of Concept Selection as an Argument down the Line

The detailed development, implementation, and operation of a design usually represents a significant investment. This makes it a good idea to first explore a number of possible approaches before committing to a single concept.

But concept selection is a strategic choice. The decision comes down to a judgement of which concept looks most promising, not to a determination of which one is certain to have the best possible performance. And at the end of a completed design project, you can never be certain that a choice to go with a different concept would not, in fact, have led to a better outcome. It is just that, at the time, this concept looked best, and therefore it was the one selected for further investment of development resources. Who knows what would have happened if those same resources had been invested into a different concept?

Soft Spots in Design Arguments

A design is always presented as a means to achieve a goal of some kind, in a certain situation or context. To argue that the proposed design will actually do this requires a bit of a detour, however. First of all, goals are usually complex, ambiguous, and ill-defined. They need to be made operational in a set of objectively testable criteria (functional requirements, performance criteria, and constraints). Secondly, it is not obvious from the plans for an artefact how that thing will do its work, precisely. Its behavior needs to be predicted. Predicted behavior can be evaluated in terms of the operational criteria. This is the claim that designers can actually establish. It serves as a proxy for the actual motivation behind the design, the expectation that the design will actually achieve its goals in the real world.

The translation of a complex goal into an unambiguous, operational set of criteria is not straightforward. Different people can legitimately interpret the same goal differently. The argument for a design proposal needs to establish, therefore, that this translation is a good one. Does it capture all the relevant aspects? Is anything lost in the definitions and quantifications employed? Is it possible to formally meet these criteria, while clearly failing to achieve the actual goal?

Predicting the behavior and performance of the proposed system can look like the straightforward, rational, objective part of a design project. But this is not straightforward either. To predict something’s behavior, we need to model it. Models are always simplified, partial and idealized representations. Abstract models can be validated through controlled tests with a prototype, but tests also only pick out parts of the actual operation of a system, and prototypes are, like abstract models, partial, idealized representations. In fact, they often introduce properties that the actually proposed design would not have. Here as well, the argument relies heavily on judgements of definition, translation, and interpretation.

Discovery and Justification in Design Proposals

What is the logic of design proposals? What argument is or needs to be made when you present a design? What is it that a design proposal does and what criteria must it meet to perform this function?

Engineering can be contrasted with science in that it is not only descriptive, but also prescriptive. The goal of a scientific paper is to describe and explain the world as it is. An engineer’s design prescribes or at least proposes what should be done or changed in the world: ‘if you have a certain goal, then here is a plan to achieve it’.

This makes a design proposal, in rhetorical terms, an argument about policy. Much of it may be concerned with facts and causation, in the end it is a question of means, ends, and value. Such an argument is always relative. The proposal can be compared to existing options, alternative proposals, and to leaving the situation unchanged. And while scientific claims aim at universality, designs are always context-dependent, appropriate to a specific time and place.

If this is the argument we need to make, how do we argue it?


Asking Why

Design teachers continually ask their students: why? This is frustrating for the student and, in the end, ineffective. Daniel Dennett’s two versions of “why?” may help us think this through.

Students interpret this question, I think, as “how come?” In any case, that’s often how they answer it. They start telling us about all the steps in their process, the changes, developments, and other design moves they made that culminated (for the time being) in this particular feature.

The teacher, I think, is interested in “what for?” What is the value or function of this feature? What is the effect? But often, there probably is no intended effect. This is just the first shape that came to mind, or the dimension that fit without causing any explicit problems.

Come to think of it, the student may very well understand that the teacher is asking “why?” in the sense of “what for?”, but when they don’t have an answer, they just start describing their “how come” origins.

And, in fact, it doesn’t really matter whether there is an intended effect to answer the teacher’s “why” question. The answer might be, no reason — yet. Because that’s why “why?” is an interesting and potentially productive question: what might or could the effect of this feature, nut, bolt, angle, or dimension be?

A well-considered design is exactly that: rigorously considered. This means that for every ‘independent variable’ – every feature under the designer’s control, and thus everything the designer is forced to make a choice about – it has been considered what the effect is, what effects could be produced by varying this variable, whether these are positive and could be further strengthened, or whether these are negative and could be minimized or compensated for somehow.

On Design as Research

Designing a building or product forces you to solve a range of problems, to answer a set of questions. A car needs an engine cover, doors, a trunk that opens, openings in the body for headlights, etcetera. A building needs a stable structure, doors, windows, insulation, waterproofing, perhaps floor levels, it should provide functional spaces, etcetera. There are issues to deal with at the level of the whole design, and there are parts, fragments, and details to work out.

Dealing with such a set of issues, and their interactions, conflicts, and overlap, leads to a thorough interrogation of the material or technology you’re working with. Some of the answers will be specific to this one design. But a few of them will be of more general value. They could become a standard component, technique, or pattern. A standardized detail, combination of techniques, or construction method, for instance.

Such experiments can test and/or explore. They can ask, does it work? Or they can ask, what if?