On Design Report Structures and Different Kinds of Prototype Tests

When presenting the results of a design project, including a prototype test, I tend to recommend this chapter order:

  • Concepts and Selection
  • Final Design
  • Test / Validation

This order is based on typical peer-reviewed papers presenting the ‘design and validation’ of ‘a novel device’ or something. It also assumes a substantial difference between the final design and the chosen concept that is not the direct outcome of exploratory testing with a prototype. This order works well when the test is aimed at validating a specific part of (the performance of) the design. The final design, in this set-up, functions as a type of hypothesis that is then empirically tested.

In a course that I teach, where students usually dive right into prototyping after concept selection, this order doesn’t always work. And it’s confusing for them. Especially for those students who end up effectively using (early versions of) their prototype as a sketch model to discover things about their concept and to iteratively develop their design. There is also little time available in this project (and too little technical knowledge amongst these particular students) to develop the design as a whole much further after concept selection.

In these cases, it would probably work better to change the order:

  • Concepts and Selection
  • Prototype Test
  • Final Design

You might even skip the ‘final design’ section entirely in favour of a discussion of future development. The prototype test, then, becomes not so much a focused validation of one key element within a larger complex design, but more an exploration and/or proof of principle of the chosen concept – more a validation of (the choice for) a certain solution principle than of a full design.

Luddites, progress, and mansions

In the introduction to Blood in the Machine, Brian Merchant points out that the workers who started smashing machines at the turn of the 19th century had never been taught to see technology as inherently progressive.

We have. And not just that. We’ve also been taught that technological change – or ‘development’ – is both unavoidable and desirable.

For some reason this remark reminds me of the architecture students I’m working with at the moment, who’ve been given a brief to design ‘a sustainable home’. Many of them are designing massive mansions. Constructed of rammed earth, or floating on water to be climate-resilient, but hardly ‘sustainable’. All of them, I think, are designing free-standing, single-household houses. And most wouldn’t even house a family; they’re one-bedroom affairs, perhaps with a study.

Isn’t a sustainable home by definition a collective home? Something terraced, small, or built for collective or multi-generational living?

It really strikes me how little even those in a creative and, in some ways, highly socially conscious and critical field such as architecture seem to think about redesigning the way we live. The form of our technology.

Perhaps that’s why that remark by Merchant brings up this experience: the ingrained assumption that the technology and design of society are on some fixed, natural, unavoidable path. We’re just along for the ride. With little more agency than to build a fantasy mansion or two.

Aspiration and the View from the Inside

The philosopher Agnes Callard argues in her book Aspiration that it is possible to want to become something you cannot yet understand. That it is possible to rationally pursue a way or view of life of which it is currently impossible for you to judge the value. For example, to aspire to become a music lover, a parent, the kind of person who enjoys long walks – or a designer.

There is a paradox here because it is impossible to (fully) judge the value of achieving such goals before achieving them. So how can you pursue them rationally, Callard asks. Their value is only properly visible from the inside, to those who have already become music lovers, parents, walkers, or designers – those who have already passed through the looking glass.

This may be a good metaphor to use when explaining this predicament to design students and teachers. That experienced designers have stepped into a world or bubble that can be described accurately, but only to those who are also inside. As if they’ve put on an AR headset and now see things that others simply don’t. It is also similar to the difficulty of explaining or characterizing a new taste to someone who has never eaten a particular snack or food. There is a truth to how it tastes. Most people who’ve eaten the thing will agree on its character. But it cannot fully be explained in words to those who have never tasted it.

The Format of Design Project Reports Leads Students to Develop Misconceptions

This is an unstructured, thinking-out-loud exploration of pretty much the same point as I made in Design Reports vs. Design Papers.

I think there may be a fundamental problem in the way we ask students to produce design reports that is making it unclear what the lessons are, exactly.

Design reports in education serve two separate functions: presenting/justifying the design proposal and showing that the student(s) did the work. This combination leads students to develop misconceptions, I believe. The reason we want to see some things in their reports (evidence of their process) is that we want to check whether they applied, and learned from, the methods we ask them to practice. The reason for wanting other things in their reports (a consistent and coherent argument with only the evidence relevant to that argument) is different: that is what is necessary for a convincing outcome.

We judge things like a morphological chart (especially in earlier projects) based on criteria relevant to how the student is developing their approach/process. But those criteria are not quite the same as the ones relevant for judging how convincing the overall result or final claims are.

Another way to phrase this difference might be the difference between efficiency and effectiveness. We want students to develop an efficient and effective process, but the value of this is instrumental. In the end, only the effectiveness counts when we’re judging design proposals.

This tension or difference also becomes apparent when we compare student design reports with published papers reporting the results of design work. In a paper or presentation to critical peers, it is not a relevant question whether you wasted time or not. The only thing that counts is the final design, what claims you make about it, and what evidence you have for those claims. Much is left out that we do ask students to show in their reports. And this is a difference in kind, not just a difference in level, depth, detail, or quality.

This difference also highlights the contrast between design as an academic discipline and design as professional practice. In industry, efficiency, risk management, and effective use of time and resources are important; satisficing strategies are often appropriate. Academic values are different: there, understanding, logical consistency, accuracy, and other such goals matter more. Aiming at ‘complete’ exploration and mapping of options is more important in this context. And the tolerance for leaving certain practical matters in the design for later, focussing on a core, novel working principle first, is far higher.

Three Ways of Justifying Design Features

Yesterday, in a discussion with a student on how to structure their design report, I found myself constructing a little typology of three types of justification for design decisions, each with their own rhetorical structure and form of presentation.

First, a particular feature of a design can be selected from alternatives developed in parallel. We do this at the overall level with concepts, usually three of them. These alternatives do not follow from one another but are developed independently of each other; they are explorations of different approaches, and each represents a different set of trade-offs. Sometimes they are developed in a sequence, one after the other, but they are sufficiently independent of each other that they could have been developed in parallel, as three alternative answers to the same design problem, so that each option can be evaluated using the same set of criteria. You can also do this at the level of details. Alternative ways to construct the frame, for instance, or different options for a hub assembly. In a report, you’d present these options side by side, with an argument for why one of them is the better choice.
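To make the ‘same set of criteria’ point concrete: a weighted decision matrix is one common way to present such a side-by-side evaluation. The sketch below is purely illustrative; the criteria, weights, and scores are my own made-up numbers, not taken from any actual project.

```python
# Minimal sketch of a weighted decision matrix: three independently
# developed concepts scored against one shared set of weighted criteria.
# All criteria, weights, and scores here are illustrative assumptions.

weights = {"cost": 0.2, "weight": 0.3, "ease of use": 0.5}

scores = {  # each concept scored 1-5 on every criterion
    "Concept A": {"cost": 4, "weight": 2, "ease of use": 3},
    "Concept B": {"cost": 2, "weight": 5, "ease of use": 4},
    "Concept C": {"cost": 3, "weight": 3, "ease of use": 5},
}

for concept, s in scores.items():
    total = sum(weights[c] * s[c] for c in weights)
    print(f"{concept}: {total:.1f}")
```

The shared structure is what makes the comparison legitimate: every concept is judged on exactly the same dimensions, so the argument for the winner follows directly from the table.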

Second, design features or geometries can be the endpoint of a single-track, iterative exploration or evolution. In this case you also have a number of alternatives that were considered, but they are not equivalent and could not have been developed independently, in parallel. Instead, they form a sequence, where an evaluation of the strengths and weaknesses of each iteration forms the argument for the next one. The criteria used to get from one step to the next might differ from the considerations that led to the step after that. In a report, you can present the main stages of such an evolution, arranged chronologically, together with an explanation of the dimensions, features, or phenomena that turned out to be the most relevant, and how they shaped (and justify) the final form and properties of the part or construction.

Third, design features can also be the outcome of calculations that determine their correct or optimal value. Such design decisions may also have gone through iterations, or have been considered next to alternatives, but that history is no longer relevant for arguing the final outcome. Such decisions (a gear ratio, the length of a lever, the thickness of a beam) are best and most clearly justified by presenting a mathematical model, or formula, incorporating particular assumptions, constraints, and safety margins, leading to a single correct or optimal value.
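As a minimal sketch of this third kind of justification (the load case, material, and all numbers below are assumptions invented for illustration, not taken from any real project), the thickness of a rectangular cantilever beam might be justified like this:

```python
from math import sqrt

# Illustrative beam-thickness calculation (all values assumed):
# a rectangular cantilever loaded at its tip, sized for bending stress.

F = 500.0            # tip load [N]
L = 0.40             # beam length [m]
b = 0.03             # beam width [m]
sigma_yield = 250e6  # yield stress, roughly mild steel [Pa]
SF = 2.0             # safety factor

M = F * L                       # maximum bending moment at the fixed end [Nm]
sigma_allow = sigma_yield / SF  # allowable bending stress [Pa]

# For a rectangular cross-section, sigma = 6*M / (b * t^2);
# solving for the required thickness t:
t = sqrt(6 * M / (b * sigma_allow))

print(f"Required thickness: {t * 1000:.1f} mm")  # about 17.9 mm
```

Note how the history of iterations disappears here: the model, the assumptions, and the safety factor carry the whole argument for the final value.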

How do you teach other things than the climate crisis in a time of climate crisis?

In April, I’ll teach a new round of the course ‘Designing Medical Technology’, an introductory design course for students in the BSc programme ‘Clinical Technology’. I’ve been making an effort in past years to collect assignments focused on improving healthcare in the Global South and to emphasize the downsides of high-tech high-energy healthcare systems.

I still feel conflicted about the course as it is. The goal is to teach basic design skills and have students experience a design process from problem definition through exploration and development to validation. But should you even teach other things than the climate crisis, in a time of climate crisis? If so, how?

One way to say ‘yes’ to that question is when you teach a skill that is valuable, or even necessary, for dealing with the crises we’re in. And designing seems to qualify.

OK, so if we accept that it’s perfectly sensible to still teach design in the current situation, how should we teach it now? Because it’s certainly not a given that students are going to use their design abilities to deal with the climate and social justice crises. In fact, I feel there is quite a big risk that the opposite will happen if they exercise those skills within the current extractive and destructive system.

Design problems that lend themselves to introductory courses often call for a new or improved product. When you focus on functionality, reliability, or usability, the results (if designed by first-time novice designers) are likely to be more resource- and energy-intensive, not less. Electronics are added; extra materials are used; whole new product categories are invented.

So why not focus on sustainability instead? Wouldn’t that be an easy fix? I’m not so sure. This is where the conflict is, for me. Because function, construction, and use can be straightforwardly and realistically explored and experimented with by naïve designers (through sketching and modelling). And that experimental, explorative way of thinking (by doing and making) is at the core of learning to design.

I’m afraid that the (abstract) systems thinking required for going after more sustainable systems is both too complex and too difficult to make accessible in an understaffed, too-short introductory course. At the very least, my own toolkit of exercises, prompts, and instructions has developed around experimenting with the more basic industrial design domains, and those exercises do not lend themselves well to a systems-critical approach. Industrial design starts from the assumption that the ‘solution’ to ‘the problem’ is going to be an industrially produced, commercial product. Something that a company can market and turn a profit on. Something that does something new, or outperforms current products. Industrial design tends to lead to more.

But what we need, of course, is to start doing less. Use less energy. Waste less. Dump less. Rely on complex supply chains and cheap exploitative labor less. Fewer electronics. Fewer products. Coming to terms with the fact that our dreams about improved medical devices are often actually impossible as soon as we admit the rest of the world, and the future of our world, into the system boundary of ‘the problem’.

Critical Pedagogy and Engineering Design

So I’ve been reading a lot of Critical Pedagogy lately – Paulo Freire’s Pedagogy of the Oppressed, bell hooks’ Teaching to Transgress, and, currently, Jesse Stommel and Sean Michael Morris’ An Urgency of Teachers. I find myself both in strong agreement, nodding along and thinking ‘Yes!’, and at the same time with strong doubts about whether and how to translate it to engineering education, especially in first-year BSc courses.

The question of how Critical Pedagogy applies to STEM fields has been addressed, but still: almost all the examples and proposals from its main proponents and practitioners appear to relate to liberal education humanities classes.

Design education, whether in engineering design, architecture, or other design disciplines, sits somewhere in between outright mechanics and maths classes and full-on humanities education. On the one hand, it is a matter of thinking critically about the world and your values and goals in relation to it; it empowers; it is, I believe, already sometimes a liberating experience. But on the other hand – again, especially at the lower levels, in introductory projects – it very much has the feel of ‘training’ and ‘instruction’, as opposed to true education that is interactive and egalitarian from the outset.

As a design teacher, you do act from a position of authority, the authority of expertise. You have a skill, a set of abilities, that your students don’t yet have. They came to your faculty because they want to learn how to do what you do. And for that to happen, they need to submit to your instructions. First they need to do without understanding, before being able to look back critically and understand why it is you had them do certain things (cf. Donald Schön).

Now that I’m writing this, I realize that student numbers make a big difference here. In a studio of ~25 students, it’s actually not so difficult to be truly responsive, to interrogate students’ ideas and ideals together as a group. With ~750 first-year students, in 14 clusters of 8 groups of 6 or 7 students, together with a small army of coaches and student assistants, it’s a whole different story. There, you’re practically forced to put up a sort of obstacle course for the students to run through, egged on and managed by strict deadlines, and then to respond only in a much more limited way, and only to selected work by a limited subset of students.

Perhaps the main obstacle to transforming my pedagogy, then, is simply the raw numbers? That would be ironic, as it’s exactly that mass character of more and more classes that contributes to students learning to just do what’s required, to listen, and to adapt to how things are, instead of developing their own critical awareness.

Ranking, Evaluating, and Liking

I’ve been thinking and reading about ‘ungrading’.

I first encountered the argument against numerical grading of students’ performance in Sanjoy Mahajan’s course Teaching College-Level Science and Engineering, which links to Alfie Kohn’s The Case Against Grades. The term ‘ungrading’ was introduced by Jesse Stommel, who comes at the idea informed by the much broader notion of ‘critical pedagogy’. In a recent online presentation, Stommel mentioned the work of Peter Elbow, which led me to Elbow’s essay Ranking, evaluating, and liking: Sorting out three forms of judgment.

Now, I’ve read many arguments against grading — that it decreases intrinsic motivation, that grades are not effective feedback, that they do not express how much was learned, etc. — but Elbow states the case in a way I found striking:

Differences between student work are multi-dimensional.

Grades are one-dimensional.

Therefore, grades are mostly meaningless.

Peter Elbow, paraphrased.

When you put it like that, it’s so obvious! Of course grades feel like bullshit.

In the first part of his essay, Elbow argues that we should rank as little as possible. In the second part, he argues that we should try, instead, to evaluate — to provide feedback on multiple criteria. And he argues that teaching should include ‘evaluation-free zones’, where students are free to follow their own judgement without worrying about what the teacher wants to see. I think Elbow is correct here, but this second part of the essay was hardly surprising, novel, or uncommonly insightful to me.

But then on to the last part, the part on liking. Elbow writes:

It’s not improvement that leads to liking, but rather liking that leads to improvement.

Elbow, P. (1993). Ranking, evaluating, and liking: Sorting out three forms of judgment. College English, 55(2), 187-206.

I found his discussion of the need for teachers to like their students’ work in order to be able to give good feedback spot-on. He’s talking about teaching and evaluating writing assignments, but the same goes for design projects, in my experience:

If I like a piece, I don’t have to pussyfoot around with my criticism. It’s when I don’t like their writing that I find myself tiptoeing: trying to soften my criticism, trying to find something nice to say–and usually sounding fake, often unclear. I see the same thing with my own writing. If I like it, I can criticize it better. I have faith that there’ll still be something good left, even if I train my full critical guns on it.

This!

I find it easy and natural to be excited and enthusiastic about student work. And I’ve always known that this was a big part of being able to teach well. But I had never quite put it together with my equally great eagerness to critique, to point out problems and possible improvements.

Good teachers see what is only potentially good, they get a kick out of mere possibility–and they encourage it. When I manage to do this, I teach well.

Yes!

Although I think I might need to make this more explicit to students, and to become better at pointing out what exactly I see that is potentially wonderful.

Let Them Make the Thing First

When introducing design methods, perhaps it would be a good idea to focus instruction purely on the product at first. What table, matrix, or whatever should they make? What criteria should that thing adhere to? Explanations of why those rules apply, and of what role the method can and cannot play in a design process, seem to fall mostly on deaf ears at first contact.

Make the thing. Do it again in a different course. Reflect on the process of making and using it. Learn the theoretical considerations then, instead of beforehand. Concrete application before abstract explanation. The other way ’round feels logical to teachers, but may simply not be very effective: you haven’t created a ‘time for telling’ yet (cf. Daniel Schwartz).

Course Development: The Programme of Requirements (Programma van Eisen)

In my first year as a teacher on the design project in the first quarter of the first year of the Mechanical Engineering (Werktuigbouwkunde) programme, my teaching of the programme of requirements focused mainly on explaining testability: operationalisation and accessibility. But the exam revealed that students had understood this completely wrong. Instead of looking at whether a criterion was objectively measurable (e.g. ‘as light as possible’), they had mainly picked up that there had to be a hard limit (e.g. ‘lighter than the previous version’). If there was a limit but no objectively measurable quantity, they judged the criterion to be fine (e.g. ‘easier to use than the competitor’s product’). And vice versa, students had concluded that criteria without a hard limit but with a perfectly measurable variable were wrong (e.g. ‘as cheap as possible’).

The following year, I therefore paid attention not only to testability but also to different types of criteria. The most important distinction there was between functional requirements (‘What MUST it do?’) and performance criteria (‘When does it do that WELL?’). But I had also made constraints and specifications part of the material. This went better, but I still found it difficult to get the difference between those four types of criteria across.
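As a hypothetical illustration (my own example, not taken from the course material), the four types might look like this for something like a bicycle lock:

  • Functional requirement: the lock must secure a bicycle to a fixed object (‘What MUST it do?’).
  • Performance criterion: the lighter the lock, the better, measured in grams (‘When does it do that WELL?’).
  • Constraint: the production cost may not exceed a budget imposed from outside the design team.
  • Specification: the shackle has a diameter of 12 mm – a fixed, exact property of the final design.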
