On Design Report Structures and Different Kinds of Prototype Tests

When presenting the results of a design project, including a prototype test, I tend to recommend this chapter order:

  • Concepts and Selection
  • Final Design
  • Test / Validation

This order is based on typical peer-reviewed papers presenting the ‘design and validation’ of ‘a novel device’ or something similar. It also assumes a substantial difference between the final design and the chosen concept that is not the direct outcome of exploratory testing with a prototype. This order works well when the test is aimed at validating a specific part of (the performance of) the design. The final design, in this set-up, functions as a kind of hypothesis that is then empirically tested.

In a course that I teach where students usually dive right into prototyping after concept selection, this order doesn’t always work. And it’s confusing for them. Especially for those students who end up effectively using (early versions of) their prototype as a sketch model to discover things about their concept and to iteratively develop their design. There is also little time available in this project (and too little technical knowledge amongst these particular students) to really develop the design as a whole very much after the concept selection.

In these cases, it would probably work better to change the order:

  • Concepts and Selection
  • Prototype test
  • Final design

You might even skip the ‘final design’ section entirely in favour of a discussion of future development. The prototype test, then, becomes not so much a focused validation of one key element within a larger complex design, but more an exploration and/or proof of principle of the chosen concept, more a validation of (the choice for) a certain solution principle than of a full design.

The role or lack of a client distinguishes academic from professional design work

Reviewing a number of (engineering) design textbooks, it strikes me that none of them discuss what a good set of concepts looks like, other than that they are the most promising options.

Together with the fact that these textbooks give little to no guidance on how to construct and present the complete case arguing for the final design, this lack of discussion of the collection of concepts as a collection – and what defines it as such – seems to be a result of these books’ framing of design and the design process in a professional context.

One big difference between that professional context and an academic context (including many educational settings) is the role and presence – or lack thereof – of a client. Concept selection seems like a particularly good example of this. In a professional setting, you would present your concepts and your evaluation of them, together with a recommendation on which to proceed with, to your client(s). You would give them the final say or ‘OK’ on that decision, or at least come to a consensus. And because that decision is taken together, at a specific moment in time, in a specific project context, it matters less whether that set of concepts has a particular logic to it.

In an academic context, however, if you present concepts and a comparison at all, you present them only at the end of the project, together with the – further developed – final design. You write it all up in a single (peer-reviewed) paper. In that context, where you’ve selected a concept yourself and already further developed a design based on one, the concept comparison and ‘selection’ is no longer a forward-looking strategic proposal but a component in the justification/support for your final design. Rhetorically and epistemologically, it’s doing (can do) something quite different.

The Format of Design Project Reports Leads Students to Develop Misconceptions

This is an unstructured, thinking-out-loud exploration of pretty much the same point as I made in Design Reports vs. Design Papers.

I think there may be a fundamental problem in the way we ask students to produce design reports that is making it unclear what the lessons are, exactly.

Design reports in education serve two separate functions: presenting/justifying the design proposal and showing that the student(s) did the work. This combination, I believe, leads students to develop misconceptions. The reason we want to see some things in their reports (evidence of their process) is that we want to check whether they applied, and learned from applying, the methods we ask them to practice. The reason for wanting other things/properties in their reports (a consistent and coherent argument with only the evidence relevant to that argument) is a different one: that is what is necessary for a convincing outcome.

We judge things like a morphological chart (especially in earlier projects) based on criteria relevant to how the student is developing their approach/process. But those criteria are not quite the same as the ones relevant for judging how convincing the overall result and final claims are.

Another way to phrase this difference might be the difference between efficiency and effectiveness. We want students to develop an efficient and effective process, but the value of this is instrumental. In the end, only the effectiveness counts when we’re judging design proposals.

This tension or difference also becomes apparent when we compare student design reports with published papers reporting the results of design work. In a paper or presentation to critical peers, it is not a relevant question whether you wasted time or not. The only thing that counts is the final design, what claims you make about it, and what evidence you have for those claims. Much is left out that we do ask students to show in their reports. And this is a difference in kind, not just a difference in level, depth, detail, or quality.

This difference also highlights the contrast between design as an academic discipline and design as professional practice. In industry, efficiency, risk management, effective use of time and resources are important. Satisficing strategies are often appropriate. Academic values are different. There, understanding, logical consistency, accuracy, and other goals are more important. Aiming at ‘complete’ exploration and mapping of options is more important in this context. And tolerance for leaving certain practical matters in the design for later and focussing on a core, novel working principle first is far higher.

Three Ways of Justifying Design Features

Yesterday, in a discussion with a student on how to structure their design report, I found myself constructing a little typology of three types of justification for design decisions, each with their own rhetorical structure and form of presentation.

First, a particular feature of a design can be selected from alternatives developed in parallel. We do this at the overall level with concepts, usually three of them. These alternatives do not follow from one another but are developed independently of each other; they are explorations of different approaches, and each represents a different set of trade-offs. Sometimes they are developed in a sequence, one after the other, but they are sufficiently independent of each other that they could have been developed in parallel, as three alternative answers to the same design problem, so that each option can be evaluated using the same set of criteria. You can also do this at the level of details. Alternative ways to construct the frame, for instance, or different options for a hub assembly. In a report, you’d present these options side by side, with an argument for why one of them is the better choice.
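Such a side-by-side comparison is often presented as a selection matrix: the same criteria, with weights, applied to every alternative. A minimal sketch, in which the concept names, criteria, weights, and scores are all invented for illustration:

```python
# Illustrative sketch (hypothetical data): a weighted-criteria
# comparison of three concepts, as in a concept selection matrix.

weights = {"performance": 0.5, "cost": 0.3, "manufacturability": 0.2}

scores = {
    "Concept A": {"performance": 4, "cost": 2, "manufacturability": 3},
    "Concept B": {"performance": 3, "cost": 4, "manufacturability": 4},
    "Concept C": {"performance": 5, "cost": 1, "manufacturability": 2},
}

def weighted_total(concept_scores, weights):
    """Sum of score times weight over the shared set of criteria."""
    return sum(concept_scores[criterion] * w for criterion, w in weights.items())

totals = {name: weighted_total(s, weights) for name, s in scores.items()}
best = max(totals, key=totals.get)

for name, total in totals.items():
    print(f"{name}: {total:.2f}")
print("Recommended:", best)
```

The point of the shared `weights` dictionary is exactly the independence requirement above: every alternative is scored against the same set of criteria, so the totals are comparable.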

Second, design features or geometries can be the endpoint of a single-track, iterative exploration or evolution. In this case you also have a number of alternatives that were considered, but they are not equivalent, and could not have been developed independently, in parallel. Instead, they form a sequence, where an evaluation of the strengths and weaknesses of each iteration forms the argument for the next one. The criteria used to get from one step to the next might differ from the considerations that led to the step after that. In a report, you can present the main stages of such an evolution, arranged chronologically, together with an explanation of the dimensions, features, or phenomena that turned out to be the most relevant, and how they shaped (and justify) the final form and properties of the part or construction.

Third, design features can also be the outcome of calculations that determine their correct or optimal value. Such design decisions may also have gone through iterations, or have been considered next to alternatives, but that history is no longer relevant for arguing the final outcome. Such decisions (a gear ratio, the length of a lever, the thickness of a beam) are best and most clearly justified by presenting a mathematical model, or formula, incorporating particular assumptions, constraints, and safety margins, leading to a single correct or optimal value.
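A minimal sketch of this third type, taking the beam-thickness example: all the numbers (load, length, width, safety factor) are invented for illustration, and the model is the standard bending-stress formula for a rectangular cantilever section.

```python
# Illustrative sketch (hypothetical numbers): sizing the thickness of a
# rectangular cantilever beam so that peak bending stress stays below
# the material's yield stress, with a stated safety factor.
import math

F = 200.0            # tip load [N] (assumed)
L = 0.4              # beam length [m] (assumed)
b = 0.03             # beam width [m] (assumed)
sigma_yield = 250e6  # yield stress, roughly mild steel [Pa]
SF = 2.0             # safety factor (assumed)

M = F * L                       # maximum bending moment at the root [N·m]
sigma_allow = sigma_yield / SF  # allowable stress [Pa]

# For a rectangular section, peak stress = 6*M / (b*h^2); solve for h:
h = math.sqrt(6 * M / (b * sigma_allow))

print(f"required thickness: {h * 1000:.1f} mm")
```

Note how the presentation carries its own justification: the assumptions (load case, material, safety factor) are stated explicitly, and the value follows from them, so the history of earlier guesses at the thickness no longer matters.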

Design Reports vs. Design Papers

One of the things I find difficult in design education is the difference between teaching our students the skill of doing design – coming up with and developing products, machines, and other plans – and teaching them the logic of how to argue for the results of that work – presenting, justifying, and giving reasons for their proposals.

We teach our students (some version of) the design process, and then we ask them to write a report that presents that process and their design. There is a tension in that combination. In this set-up it seems logical to show how your process ‘led to’ your design. Showing your (cleaned up, idealized) process is treated as the justification or support for the final design. But the quality of your process is not necessarily evidence for the quality of your design. Conversely, with this approach it doesn’t make sense to present all your discarded ideas and other dead ends, or to show all seven and a half earlier versions of what became the final design. That would create a report that’s just as messy and chaotic as the average design process.

A ‘design report’ in this fashion tries to serve two functions: to provide evidence of learning activities, and to provide evidence for the final design’s quality. Those two sometimes conflict. At the very least they’re not the same and trying to do both in one document compromises the effect of both.

Perhaps, therefore, it would be good to make an explicit distinction between a ‘report’ and a ‘paper’? A report reports – it tells your teachers what happened. A paper presents – it describes a problem, shows evidence, and argues a proposal to an audience of peers.

If you want to see whether undergraduate students are learning the right skills and methods, ask them for a report. If you want graduate students to produce something similar to an academic paper, leave the reporting out of it.

Soft Spots in Design Arguments

A design is always presented as a means to achieve a goal of some kind, in a certain situation or context. To argue that the proposed design will actually do this requires a bit of a detour, however. First of all, goals are usually complex, ambiguous, and ill-defined. They need to be made operational in a set of objectively testable criteria (functional requirements, performance criteria, and constraints). Secondly, it is not obvious from the plans for an artefact how that thing will do its work, precisely. Its behavior needs to be predicted. Predicted behavior can be evaluated in terms of the operational criteria. This is the claim that designers can actually establish. It serves as a proxy for the actual motivation behind the design, the expectation that the design will actually achieve its goals in the real world.

The translation of a complex goal into an unambiguous, operational set of criteria is not straightforward. Different people can legitimately interpret the same goal differently. The argument for a design proposal needs to establish, therefore, that this translation is a good one. Does it capture all the relevant aspects? Is anything lost in the definitions and quantifications employed? Is it possible to formally meet these criteria, while clearly failing to achieve the actual goal?

Predicting the behavior and performance of the proposed system can look like the straightforward, rational, objective part of a design project. But this is not straightforward either. To predict something’s behavior, we need to model it. Models are always simplified, partial and idealized representations. Abstract models can be validated through controlled tests with a prototype, but tests also only pick out parts of the actual operation of a system, and prototypes are, like abstract models, partial, idealized representations. In fact, they often introduce properties that the actually proposed design would not have. Here as well, the argument relies heavily on judgements of definition, translation, and interpretation.

Discovery and Justification in Design Proposals

What is the logic of design proposals? What argument is or needs to be made when you present a design? What is it that a design proposal does and what criteria must it meet to perform this function?

Engineering can be contrasted with science in that it is not only descriptive, but also prescriptive. The goal of a scientific paper is to describe and explain the world as it is. An engineer’s design prescribes or at least proposes what should be done or changed in the world: ‘if you have a certain goal, then here is a plan to achieve it’.

This makes a design proposal, in rhetorical terms, an argument about policy. Much of it may be concerned with facts and causation, but in the end it is a question of means, ends, and value. Such an argument is always relative. The proposal can be compared to existing options, alternative proposals, and to leaving the situation unchanged. And while scientific claims aim at universality, designs are always context-dependent, appropriate to a specific time and place.

If this is the argument we need to make, how do we argue it?


What About the Logic of Design Proposals?

Next to a description of an artefact, plans for its production, and plans for its use, the product of a design project must always be a design proposal. There is the rare case where a design “speaks for itself”, but even in that instance, what that design says amounts to an argument that proposes the design’s actualization. And to argue to a boss, client, or teacher that they should make it like this automatically means arguing not to make it like that, not to leave the world as it is and keep making the same thing as before, and not to make nothing new at all.

In practice, the goal and measure of success of such a proposal is that it persuades. In academic circles, we should instead be interested in whether the argument is any good in terms of its logic and evidentiary weight. Also in practice, however, those on the receiving end of a design proposal will want to judge how successfully the arguments offered actually justify a belief in the value of the design under consideration, and to poke through any rhetorical flourishes and sales talk that may be involved. In fact, I would argue that engineers – as opposed to those with sales and business titles – are under a moral obligation to strive for the same: an honest presentation of the merits of a design, accurate rather than merely giving the impression of accuracy. If “trust me, I’m an engineer” is to remain a valid request, we should strive to be trustworthy.

What is the logic of design proposals? What, exactly, are the claims that are made when designers present the results of their efforts? And how are and can these be justified?

Is the result of design always a proposal? Do designs published in academic journals fit this description?

At first glance, they don’t. Their message is more “Here is what we made. It’s really good/interesting/valuable/impressive.” But isn’t this the same as saying “This is how we should make these kinds of things for these kinds of situations.”? Or, “This is how we should solve this problem, or reach this goal.”?

A Project on the Logic of Design

What I might do is, first, study how designs are justified and argued for; second, analyse this logic of design; and third, see what implications this might have for how we publish designs and present design proposals.

Designs always come with an argument. In professional practice, you almost always say “here is our design, and here’s why you should invest resources to produce it”. In academia, you say a subtly different thing, “here is our design, here’s what makes it new and unique, here’s what it’s good and bad at, why you might want to make something similar, etcetera”. It’s much less a proposal than a set of claims in academic work.

Come to think of it, I’d be curious what the answers would be if you simply asked engineering designers at universities the question “why do you publish your work?”, “Apart from being cited and adding to your list of publications, what would you like the effect of your publications to be?”, “What function does it serve to publish designs?”

Then again, do we really publish designs? Or is it that we publish about our designs? Yes, the basic principles and construction methods are usually described in a paper, but it’s far from the design information in patent applications, let alone from open source software.

How to argue that a proposed design works?

The ultimate evidence is, of course, to show the actual device working. But often it requires the (risky) expenditure of scarce resources to physically build a designed system. This means that a designer or design team must convince the gatekeeper for those resources (a manager or project lead, for instance) that the design that currently only exists on paper is likely to actually function and perform as intended.

In a student project, the situation is slightly different. Here, it is the students themselves who build their design, not uncommonly at their own expense. So why do we still ask them to convince their teacher that going ahead is the right decision? In this situation, the risk is not the financial cost of a failed prototype but the lost time and opportunity in the course. Failure during a course will lead to less learning, more effort on the part of the teachers, and at worst a need for the student to take the course again.

So how does arguing for a design ‘on paper’ work? First of all, before we can get to whether the argument is convincing, for it to be sound, it needs to be clear what is being claimed. This means that it must be clearly stated what the intended function is, why it’s valuable or desirable, what the requirements and restrictions are, and also what performance criteria should be used to judge the design.

Here, we get to three necessary claims:

  • that it works (what does it do?)
  • that it works well (what does good performance look like?)
  • that it’s the best you can do (are there no obvious and better alternatives?)

The first two of these seem at first glance to be relatively straightforward. Quantitative modelling, physical reasoning, and calculating expected values for the product’s features and performance seem to be what’s called for. But how do you argue the third point? How do you convince people that the proposed means to fulfill some function are the right, appropriate, or even the best means?

In my experience the answer given to this question is often a variation on “good, structured design process”. I agree that a ‘good’ process is the means to produce this argument, but it isn’t itself the reason. A rigorous process leads to considered alternatives, and it is comparison to alternatives that provides the persuasive force to accept this particular design as the preferable one. In fact, this is the only way, it seems to me, to argue for the appropriateness of a certain design to attain a certain goal. It is easier to produce appropriate alternatives through a structured, disciplined design approach, but how the alternatives are generated does not matter in the final argument on which design to accept.

The question of concept selection is distinct from the question of optimization (the second question of the three above). A clear argument about what performance criteria the design was optimized for, and that it is indeed optimized for these, only supports the claim that a local optimum has been achieved in the design. It cannot support the claim that other local optima (the best versions of designs that are fundamentally conceptually different) aren’t even higher.

This leads to the burden of proof for alternative concepts: as a designer or design team, you need to convince me that each of your concepts has been optimized towards its maximum performance, that you’ve reached the peak of the local optimum. Only after this has been established can the comparison support the further claim that a particular concept – the one with the highest expected value or performance – is preferable. For this you also need to establish that none of your concepts’ expected performance is above its achievable level, for example because an unsolved problem still exists whose resolution would detract from the quality of the concept.

Underlying a (small) set of concepts that are established as embodiments of local optima in performance there needs to be a further argument: that the concepts that were developed into complete (if rough, or abstract) design proposals represent the most promising conceptual possibilities. This requires some overview or mapping of all possible conceptual approaches to the design problem.

This entire edifice of design justification needs to be clearly presented, understandable, and accessible to the judge of a design proposal. They need to be able to go through it part by part and decide whether each part is convincing.