Grad Studio II: Dexign Signals

jonah
30 min read · Jan 24, 2022

Personal Weekly Reflections

Week 1 — January 23

Since entering grad school, I’ve been a bit skeptical of the traditional designerly methods of group collaboration. Seeing everyone posting virtually identical pictures of post-its, whiteboards, and illustrated maps quickly made me cynical about the effectiveness of these methods for delivering real results. In many cases, I saw them as purely performative. At the same time, in team situations, I’m often not as thoughtful about collaborative meaning-making as I should be.

I’ve been slowly coming around to the idea that I need to be more thoughtful about how the teams that I’m on collaborate and collectively build understanding. Similarly, I’ve come around to thinking that traditional designerly ways of collaborating (such as unbounded whiteboarding sessions) might not be such a bad idea. While methods like whiteboarding can definitely be a colossal waste of time, the benefit is that they orient people around a shared understanding of what’s being said and reduce the chances that everyone talks past one another.

In the first week of this project, I’ve already felt the negative impact of not taking the time to think about how we collaborate and arrive at a shared understanding. Looking at the documentation of past groups, I see that many used their first sessions to visually map out the similarities and overlaps between their interests and proposed topic areas.

In our two team meetings so far, we’ve also discussed our preferred topic areas and areas of similarity, but because we didn’t have a method or activity to structure this discussion, we ended up with a less resolved direction for our team. In our next meeting, I look forward to thinking more carefully about how we develop shared understandings and build meaning as a group.

Week 2 — January 31

I’m going to focus on teamwork again this week. Last week, I discussed how team conversations could benefit from a whiteboard or shared visual point of reference. We tried something like this in a FigJam as we shared our initial research and it worked quite well. The final result of our discussion is below.

This week, I’ve been thinking about how I facilitate better team discussions. How can I contribute to meaningful team discussions, and encourage/allow others to participate in a meaningful way? One of my weaknesses as a designer is that I tend to be passive in directing team meetings. If a meeting is going well, I tend to contribute and add to that positive, forward momentum. If a meeting is ineffectual, however, I have a difficult time turning it positive. If things are quiet, slow-moving, or chaotic, I have a difficult time making them productive, lively, and positive.

A few strategies and techniques that I’ve been trying to practice this week for better discussions:

  • Asking for people’s opinions. Now that I have a sense of my team members, I try to ask for their opinion directly in team meetings if I’m curious to hear it or I think it would be valuable. In many cases, this isn’t necessary because the team member will speak up on their own. But in some cases, when I’m curious to hear what someone has to say and they don’t seem to be speaking, asking them directly helps move the discussion forward.
  • Making shorter comments. I tend to repeat myself or rephrase my point to try and make sure that I was clear and got my point across. More often than not, this just leads to me speaking for longer than I should (and probably sounding less intelligent in the process).
  • Asking direct clarifying questions to make sure I understand the point being made or where we are in the meeting agenda.

Week 3 — February 6

In our first two weeks, we’d cycled through ideas related to social rituals in the home, improving green infrastructure, and mobilizing young adults for climate action, and hadn’t landed on anything that felt engaging and well defined. It was Jamie who first floated the idea of focusing on disaster relief: improving sustainable disaster relief and community resiliency. To all of us, this felt like the direction we’d been looking for and we came into Monday’s class excited about pursuing this new direction. We were so excited by the idea that it came as a bit of a shock when Peter began to push back on the concept during the critique. He asked for clarification on how our project related to the decarbonization aims in the project brief. We tried to rephrase our pitch, but Peter returned to the same central point: you need to make the case that disaster relief is a big enough carbon emitter that addressing it helps accelerate humanity onto a path to zero emissions by 2050.

I came away from the critique with some serious doubts about our concept, worried that we might have to, once again, scrap it and find a new one. Still, we collectively decided that this was not the time to walk away from our idea. We’d take the next two days to do some research and try to make as compelling a case as possible that disaster relief had a significant carbon impact that we could address.

Over the next two days, we found evidence that disaster relief efforts have an incredibly high carbon impact, with large-scale humanitarian aid involving waste, transportation, and environmental destruction. Specific humanitarian operations, such as the relief efforts in Darfur in the early 2000s and, more recently, the flood relief efforts in Bangladesh, have destroyed tens of thousands of acres of forest to make way for encampments and to burn for brick making. Perhaps most shocking, the carbon output from the relief efforts for the Haitian earthquake was greater than Haiti’s entire annual carbon impact. Furthermore, the evidence is clear that disaster relief will only become a more common occurrence that affects larger shares of the global population as the temperature rises.

We presented this research to Peter and Kristin on Wednesday and they were convinced of our argument, providing resources and ideas for us to pursue the topic further. The contrast between our team’s experience on Monday and Wednesday underscored how important it is to trust yourself about the strength of an idea. On Monday, our idea was strong, but we presented it poorly, without strong evidence. It would have been easy to walk away at that point and convince ourselves that it was the idea, not our presentation of it, that was the problem. Instead, we recognized that we needed to concentrate on selling our idea, not finding a new one.

Week 4, February 13

This week our team conducted five expert interviews and two cultural probes, and synthesized our insights through an affinity mapping exercise. The interviews were remarkably helpful in understanding the space of disaster relief, volunteering, and social innovation. But an interview can be helpful and interesting, and still not lead to meaningful insights. The affinity diagramming exercise is what we landed on to synthesize our insights. It’s a technique that gets used a lot, at least in my grad school experience so far. But I’ve never taken the time to really think about it: the value it brings, the common drawbacks, and its nuances. So that’s what I thought I would do here.

Affinity mapping or diagramming is “a process used to externalize and meaningfully cluster observations and insights from research, keeping design teams grounded in the data as they design” (Martin & Hanington).

  • The Universal Methods of Design book suggests that designers gather 50–100 insights per interview. I can’t say I’ve ever gathered that many insights from a single interview. Typically they fall more into the range of 20–40. I suspect this may mean that I need to dive deeper on the interview process, continuing to probe and ask “why” on answers to get at underlying motivators and insights. At the same time, there can also be a lot of overlapping insights between interviews on the same subject. My instinct is to not write two post-its with the same insight but perhaps this would help to locate areas of agreement or common attitudes.
  • What do you do with those post-its that cannot be grouped into a category? Aren’t there inevitably going to be some stand-alone insights that aren’t part of a larger group? Are these insights still valuable?
  • How does one move up the ladder of connections? For example, if three insights share some common theme but they’re also loosely connected to another three insights, how would you show these connections? Maybe it would work to have insights grouped in a tree data structure, with parent nodes and child nodes (see the sketch after this list).
  • Inevitably, designers are going to latch onto certain things that they heard in an interview. We all have confirmation bias and are likely to find things in any interview that resonate and stand out to us. How do we stop that from obscuring what else was said in an interview? I suppose that one should rely on the notes or recording from an interview to try and make sure that we are capturing the totality of what was said, but oftentimes designers are working under severe time constraints and listening back to an entire interview may be too time consuming. Are there other ways that designers can try to quell our biases and preexisting beliefs when capturing insights?
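
Since I keep coming back to the tree idea, below is a minimal sketch of how clustered insights could be represented as parent and child nodes. The type name, the example insights, and the interview labels are all hypothetical, made up purely to illustrate the structure (this isn’t a tool we actually used).

```typescript
// Minimal sketch of the parent/child grouping idea; all names and insights are illustrative.
interface InsightNode {
  label: string;           // the theme (for a parent) or the raw post-it insight (for a leaf)
  source?: string;         // e.g. which interview the insight came from
  children: InsightNode[]; // empty for leaf insights
}

// Two loosely related clusters can share a higher-level parent,
// which is one answer to the "ladder of connections" question above.
const root: InsightNode = {
  label: "Disaster response depends on pre-existing relationships",
  children: [
    {
      label: "Help right after a disaster is spontaneous, not coordinated",
      children: [
        { label: "Most shelter volunteers were first-timers", source: "Interview 2", children: [] },
        { label: "Neighbors pitched in without being asked", source: "Interview 4", children: [] },
      ],
    },
    {
      label: "Official coordination breaks down in the first days",
      children: [
        { label: "Conflicting evacuation instructions", source: "Interview 1", children: [] },
        { label: "No single source of truth for community needs", source: "Interview 5", children: [] },
      ],
    },
  ],
};

// Walking the tree prints the ladder of connections, indented by depth.
function printTree(node: InsightNode, depth = 0): void {
  console.log(`${"  ".repeat(depth)}- ${node.label}`);
  node.children.forEach((child) => printTree(child, depth + 1));
}

printTree(root);
```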

All this said, I did find our affinity diagramming exercise useful in moving our project forward and collecting some meaningful insights that we could collectively agree on. Although I do wish we had had more time to interview more people (preferably those impacted by a disaster) and collect 50–100 observations per interview, such are the real-world constraints of design.

Week 5, February 21

Transitioning into the generative phase of this project, we need to start thinking in less abstract and more specific terms. How can we understand disasters from a survivor’s point of view? What is the journey of someone affected by a disaster? Are there any generalizable activities or is each disaster scenario unique? As a group, we decided our first step toward understanding these questions is to look at categories of disasters (wildfires, hurricanes, floods, etc.) and map out the journey of someone experiencing that disaster. For the purposes of this post, I’ll focus on wildfires.

Preparedness actions

  • Making sure you’re signed up for local text alerts about dangers in your community
  • Having or participating in a preparedness discussion.
  • Practicing for a wildfire scenario and understanding escape routes.
  • Understanding the severity levels of a disaster
  • Clearing your home and the areas surrounding your home of combustible materials
  • Planning for how to communicate during an emergency
  • Knowing what emergency supplies are needed.
  • Conducting a roleplaying exercise to test out disaster roles.
  • “To simulate an actual event, the Prepare Your Organization tabletop exercise begins with an initial scenario description and proceeds with two scenario updates. Each phase of the scenario includes discussion questions to allow participants to focus on problem solving as a leadership team in a low-stress, consequence-free environment.”

Feelings

  • “We had learned from earlier scares that the first few hours of a fire are a chaos of conflicting instructions. Evacuation orders had been issued and cancelled, then issued again.” (Morris New Yorker)
  • Californians always knew about wildfires, but they used to be more of a nuisance; now the fire season is seventy-five days longer.
  • Plans change rapidly because the direction of the fire and the evacuation areas change rapidly.
  • “There is something awe-inspiring about kindness on this scale, and it infected everyone. In those first few days, most of the volunteers who usually worked at the BackStretch were off doing animal rescue in the fire zone…and, without being asked, everyone just pitched in and took over.” (Morris New Yorker)
  • For every [person] who stepped up in the crisis, some old friend fell strangely silent.
  • Following the maps online to see if your home has been destroyed

Problems

  • Evacuation plans have not always accounted for the severity of the wildfires (such was the case in Paradise, CA in 2018)
  • Situation changes rapidly
  • Winds can launch embers away from the main fire and ignite spot fires. This can change evacuation plans rapidly.
  • Fires can destroy communications infrastructure, which can disable the alert system.
  • Traffic jams / gridlock can stall people from getting out. Cars get abandoned when they catch on fire or run out of gas in gridlock.

Week 6 — February 27

It was an all-over-the-place kinda week for team Citizen G, and I leave it with a renewed sense of the importance of a shared vision. We began the week with a Sunday team meeting intended to outline our goals for the week. Our attentions and interests seemed a little scattered and we needed to agree on some concrete goals for how to spend our time. We settled on the following agenda for Monday’s class and agreed to set our next set of goals following that.

In class on Monday, we still faced some tension in executing on our goals. Although we each came with research on a particular disaster scenario (wildfires, hurricanes, and rising sea levels), we couldn’t reach a consensus on how to conduct the pain point mapping. I understood a pain point mapping to be a general outline of an archetypal person’s journey through a system or experience. From there, you identify points of complication, frustration, or poor management. Other members of the team had difficulty understanding the utility of this exercise, since a climate disaster is, by its nature, a traumatic, painful experience — how could we identify pain points when the entire experience is one of pain? Other team members, myself included, felt that we could identify pain points related to human management of disasters, not the painful impact of the disaster itself. In other words, what pain points exist in the disaster management system? Eventually, we agreed to complete the exercise, but it still felt that we weren’t in agreement on our approach to the problem.

On Tuesday morning, we heard a lecture from Ashley Deal and Raelynn O’Leary, adjunct faculty and partners at a local design consultancy. They laid out their method for creating and conducting research inquiries. A lot of their work underscored the importance of clearly communicating project and research goals with the client. As the messy design process unfolds, it’s important to keep a clear, brief, and mutually agreed-upon set of goals. These goals help internally to design a targeted and precise research probe, and externally to communicate and delimit the goals of the research to stakeholders.

After the lecture, a few team members asked that we take time to draft and agree to our own set of project and research goals. It took longer than we may have expected, but I think that only underscored how divergent our ideas were. Eventually, we ended up with the following goals and objectives.

It was fascinating to me how quickly these goals changed the progress of our work. We were able to design our research probe with relative ease, referring back to our goals and objectives when we encountered a challenging decision. Going forward, I hope we can return to these goals, improving them when needed and using them as a reference point. Looking further ahead, when we begin the prototyping and evaluative research phase, I know that we should immediately frame our goals and objectives.

Week 7, March 13

We planned and conducted our generative research this week. Ultimately, our generative research consisted of three phases: (1) an online activity to discover what people wanted to protect in a disaster scenario; (2) a follow up phone interview with older homeowners to discover in more detail what they wanted to protect; (3) an in-person workshop that consisted of archetype and collective visioning activities.

I’ve listed out my comments on these three phases below:

  • Our collective visioning activity was difficult for participants to understand and partake in. They found it difficult to strike the balance between using the physical town that we had provided them and imagining the details of the town and the scenario. In other words, we were asking them to be both specific (with the layout of the physical town) and abstract (in the exact circumstances and their solution) at the same time. Some participants found it difficult, or practically impossible, to strike this balance.
  • One participant quickly and unprompted took on a leadership role within the collaborative game. They were helpful in engaging people but left little room for others to participate. As a facilitator, it was frustrating to watch a participant take up so much speaking time and limit the participation of others. At the same time, I had no idea whether or how I should intervene.
  • Liz Sanders was absolutely right that people tend to imbue archetypes with their own beliefs and desires for themselves. It was intriguing to watch people describe their archetypes as people who complemented or reflected their own values and desires.
  • People typically take the path of least resistance. If you give them the opportunity to not be creative, say by using a stock image, they will take it. Make sure that if you want people to be imaginative, you don’t give them suggestive materials. They’ll be drawn to the suggestion and lose their creative power.
  • Generative synthesis is an aspect of this project that deserved more time and attention. Once the workshops were devised and executed, we spent very little time as a class discussing how best to analyze and synthesize the information gathered. Instead of thinking about what type of analysis suited the information gathered, each team retreated to commonly used synthesis techniques. I would be interested to know what analysis and synthesis techniques are well suited to generative research results and/or how these techniques are selected based on the results that are gathered.

Week 8, March 20

The week began in a dash to finish our generative research presentation. We came into class Monday with a good idea of our insights from the generative workshop, but very little concept development. In class, Peter pushed us to create more substantial concepts to pitch for Wednesday’s presentation and to match them to the insights that we had found from our research. Ultimately, we presented three concepts on Wednesday, the first two enabling peer-to-peer communication for those directly impacted by disaster scenarios and the third a data collection tool for first responders and managers of disaster responses. Below are the storyboards that we developed for each of these solutions.

Arnold’s primary critique of our presentation was that our solutions were too limited to discrete and acute disaster scenarios. Instead, we need to think about a world in which disasters are an everyday occurrence and some disasters, such as a drought, are drawn out over a long period of time, not limited to a discrete moment. The next morning, Kristin gave us a similar critique. Kristin claimed that we should focus on the interface between small to medium sized community groups and large national/international humanitarian organizations. She thought the focus on individual action was too small scale for the problem space and that it would be better to create the conditions for individuals to take many different types of actions rather than design for a single action. As a group, we determined that both pieces of criticism rang true to us, and yet we didn’t want to completely lose our connection to the people directly affected by disasters in order to focus entirely on organizational actors.

Instead, we settled on a loose structure of an idea. We wanted to explore a tool that would allow nonprofits to easily gather data on their work (impact, cost, cost-effectiveness, etc.) and use that data to help apply for grant funding. Individually, we set out to find what metrics were important for nonprofits to gather for fundraising and grant application writing, and how these metrics might be used to help secure funding.

In my own research, I found that fundraising is based largely on the reputation of the organization and the relationships that it has with those who control funding. Besides relationships and reputation, grant money largely depends on the evidence that an organization can present that it is having a positive impact on the ground. This means that an organization has to demonstrate how many people it impacts, how much it costs to make that impact, and how sustainable or scalable its impact is. In other words, there needs to be rigorous evidence that validates the work the nonprofit is doing. In example grant proposals, I also saw a tendency to include demographic information about the region they are trying to help, which is available from the US Census.

Grant proposals often go so far as to include line-by-line budgets of how the organization will use the grant money. Take, for example, a group like Regeneration, which wants to support Indigenous fire stewardship. When applying for a grant to support its work, Regeneration needs to provide a line-by-line accounting of how it plans to spend the grant money. Perhaps our solution could gather information that would help organizations account for how much money they would need. In the case of Regeneration, it could ask Indigenous people what they need to revitalize healthy fire practices. It could gather information on where Indigenous people have been barred from stewarding their ancestral land.

Week 9 — March 27

We came a long way toward narrowing in on our proposed solution this week. So I think it might be useful to give my own personal summary of our solution, if only to make sure that I understand it myself:

Proposed Design Intervention: A community-based monitoring tool that allows organizations to easily record and share information about community assets, vulnerabilities, and disaster preparedness measures.

Using the platform, invited members can tag local areas with information that will help them prepare for natural disasters, mitigating the damage of a disaster and the need for a carbon-intensive humanitarian response.
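
To make the idea concrete for myself, here is a rough sketch of what a tagged point could look like as data. The type and field names are my own invention for illustration, not a schema our team has agreed on.

```typescript
// Hypothetical data model for a community-tagged point; field names are illustrative only.
type PointCategory = "asset" | "vulnerability" | "preparedness";

interface MapPoint {
  id: string;
  mapId: string;                          // the community map/initiative the point belongs to
  category: PointCategory;
  title: string;                          // e.g. "Backup generator at the fire hall"
  notes?: string;
  location: { lat: number; lng: number };
  addedBy: string;                        // the invited member who tagged the point
  createdAt: string;                      // ISO timestamp
}

// Example: a vulnerability tagged by an invited member (coordinates are arbitrary).
const floodedUnderpass: MapPoint = {
  id: "pt-014",
  mapId: "map-example-neighborhood",
  category: "vulnerability",
  title: "Underpass floods after heavy rain",
  location: { lat: 40.44, lng: -79.88 },
  addedBy: "member-07",
  createdAt: new Date().toISOString(),
};
```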

This proposal stems from a few of our team’s generative research insights. First, it focuses on bridging the information gaps that often exist in disaster scenarios. We found that people are forced to make decisions based on imperfect and imprecise information, especially when advance preparations haven’t been made and communication channels are limited or down. Our solution is a preparation tool that asks users to map community assets, vulnerabilities, and preparedness measures before a disaster. We also intend that our solution will function when internet access is limited or unavailable by storing data on a local device and syncing with other devices only when Wi-Fi is available or a portable router can be used to enable an internet connection. This feature is in response to the feedback we received during our last presentation about the lack of reliable internet service in a disaster scenario.
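
We haven’t specified how the offline behavior would actually be implemented, but the rough logic I have in mind looks like the sketch below: save every new point locally first, and flush a pending queue whenever connectivity returns. The queue, the connectivity flag, and the /sync endpoint are all placeholder assumptions, not a real backend.

```typescript
// Sketch of an offline-first queue, reusing the hypothetical MapPoint type from the sketch above.
// Local persistence is stubbed with an in-memory array; a real app would write to device storage.
const pendingQueue: MapPoint[] = [];
let online = false; // would be driven by real connectivity events in an actual app

function addPoint(point: MapPoint): void {
  // Always save locally first so nothing is lost when there is no connection.
  pendingQueue.push(point);
  if (online) void syncPending();
}

async function syncPending(): Promise<void> {
  while (online && pendingQueue.length > 0) {
    const point = pendingQueue[0];
    try {
      // Placeholder endpoint; a real implementation would also handle conflicts between devices.
      await fetch("https://example.org/sync", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(point),
      });
      pendingQueue.shift(); // drop the point only once the server has confirmed it
    } catch {
      break; // connection dropped mid-sync; retry when connectivity returns
    }
  }
}

// When Wi-Fi (or a portable router) becomes available, flush whatever accumulated offline.
function onConnectivityChange(isOnline: boolean): void {
  online = isOnline;
  if (online) void syncPending();
}
```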

This solution was also driven by our insight that relationships formed before a disaster tend to determine the effectiveness of a response. This is a tool meant to encourage people to build relationships and knowledge before a disaster occurs, such as a relationship with a community leader, first responder, or elderly/vulnerable community member. In this way, people are encouraged to collectively be the first responders that they need.

Finally, this solution was derived from a comment made by New Sun Rising’s Executive Director Scott Wolovich, who we had the pleasure of speaking with this week. Scott remarked on how an understanding of the community is the first and one of the most effective tools a local organization can have. It took New Sun Rising years of work to get out and understand life from the perspective of Millvale residents, and that understanding is a huge asset every time it petitions for funding or support of some kind.

To wrap up the week, I put together some slides that I hope capture the general purpose of our proposed intervention.

Week 10 — Collecting Evaluative Insights

Our work this week was spent on three tasks: (1) finishing up our futuring methods; (2) developing mid-fidelity screens for key user tasks; (3) collecting our evaluative work for this week’s presentation.

Up until now, we had delayed conducting our futuring methods as our concept was shifting so rapidly. Specifically, it didn’t seem worthwhile to conduct a three horizons analysis without knowing the precise behavior that we wanted to multiply. After narrowing in on our concept last week, though, we felt comfortable thinking about how this solution might fit into the world in 2030 or 2050. As a team, we completed the Causal Layered Analysis (CLA), Three Horizons Mapping, and STEEP Analysis activities, collecting the first draft on a whiteboard and then refining our insights digitally.

We began our mid-fidelity screens by determining the user task flows for when a user wants to create a map and when they want to add an additional point to a map.
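
Written out as steps, the two flows look roughly like this. The step names are my paraphrase of our diagrams, not final screen titles.

```typescript
// The two task flows we diagrammed, written as ordered step lists.
// Step names are paraphrased from our flow diagrams rather than final screen titles.
const createMapFlow: string[] = [
  "Open home screen",
  "Tap 'New map'",
  "Name the map and describe the initiative",
  "Invite members",
  "Save and open the empty map",
];

const addPointFlow: string[] = [
  "Open an existing map",
  "Tap 'Add point'",
  "Choose or confirm the location on the map",
  "Pick a category: asset, vulnerability, or preparedness",
  "Add a title, notes, or photos",
  "Save the point to the map",
];

console.log(createMapFlow.join(" -> "));
console.log(addPointFlow.join(" -> "));
```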

These diagrams gave us a shared understanding of what screens we needed to develop, and so we set about developing them. Below is a sample of our first screens.

After discussing these screens as a group, we listed a few major potential improvements:

  • Get rid of the edit map or share map buttons. These options should be listed on each map card.
  • Larger call to action on the home screen
  • The carousel feature on the home page could get annoying if a user has a lot of maps shared with him/her.
  • List map collaborators vertically, similar to Google Drive UI, instead of horizontally.
  • When a user clicks “add point” it should be assumed that the geographic point is the user’s current location with the option to edit.

We integrated these changes into the second iteration of wireframes, which are pictured below.

Week 11 — Presenting Empact

The week began with a sharp pivot in our presentation structure. We were planning to structure the presentation around the methods we had used to arrive at our final solution, but Peter advised us on Monday that it would take much too long to describe these methods and their outcomes. Better to devote the presentation to demonstrating the purpose and functionality of the solution, he said. As a group, we quickly agreed to pivot the presentation based on Peter’s feedback and laid out a rough outline to do so. It would mean a lot of work before Wednesday’s presentation, but we had a lot of confidence in the strength of our idea and wanted to represent it in a compelling way. If I took one thing away from this week, it’s that presentations need to be thought of as pitches. You’re making an argument and aren’t afforded a great deal of time to do it. Viewers come to a presentation with a relatively short attention span and less interest in understanding the detailed evidence behind decision-making. Other formats, such as writing, lend themselves more to this detailed breakdown of decisions. Presentations, on the other hand, provide you less time to make an argument, and so they become more about clarity, simplicity, and visual impact than comprehensiveness.

The storyboards that we presented, with illustrated personas on the left and screen mockups on the right, seemed to be the most impactful part of our presentation. They went a long way toward conveying our solution in a clear and visually impactful way. Oftentimes, personas are limited to a one-page summary or a name on a customer journey map. These are often one-dimensional exercises that have little impact on design. Is a customer journey map really different when you call the customer “Scott” than when you call them “customer A”? Unless personas are linked to substantial market research, their value is as an explanatory device: a way to make the use case of the solution concrete and understandable by tying it to an archetypal user. This is what we did in our presentation and it strikes me as the right way to use a persona.

The weakest part of our presentation was the last section, which tied our solution back to decarbonization goals. The rationale that we laid out in the presentation was valuable but was too vague to be truly convincing. Our team simply didn’t have enough time to connect our solution to specific key performance indicators. Since the presentation, we’ve already begun to discuss what these KPIs might be, which will make a compelling part of our final presentation and documentation.

Week 12 — Evaluative Research

  • As we build the EMPACT website, we need to consider the best way to pitch our solution. How do we communicate our solution in the most effective and compelling way? Would it be preferable to walk visitors through the features of our solution, or to illustrate a particular use case in a narrative format? Ideally, we’d like to include both: a list of features and a narrative walk-through of a use case, but our group agreed that the narrative component should take priority. Hearing the story of a specific community organization and how our solution addresses their existing problems does a far better job of case-making than a comprehensive feature list, which feels more abstract and removed from specific disaster preparedness issues.
  • As we moved into high-fidelity UI, we’ve run into one question again and again: what is the mood we want our design to project? On the one hand, our app is dealing with the enormously serious topic of climate disasters, which inclined us to pursue a more somber mood in our UI; on the other hand, our app is focused on community building as a preparedness measure against disasters, which has more of a DIY, social innovation mood. These two moods suggest very different UI styling, but ultimately we leaned toward the DIY/social innovation look. We didn’t want our app to reflect the seriousness of a disaster since it’s not about the period of the disaster itself, but the period long before the disaster (that is, everyday life) when communities can make changes that will prepare them for climate disasters down the road.
  • Preliminary user testing revealed a couple of key issues with our UI design: (1) interactive elements were not called out with color or a unique style compared to non-interactive elements; (2) we didn’t have a proper accent color to designate the most important elements on the screen; (3) the search bar needed to remain consistent at the top of the page throughout for discoverability; (4) it was slightly counterintuitive for some users that you would enter different maps of the same area and see different points, instead of one map with different filters to show different categories of data entries (a sketch of that filter idea follows this list).
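
On point (4), the alternative is a single map per area where categories are toggled on and off with filters. A minimal sketch of that filtering, reusing the hypothetical MapPoint and PointCategory types from the Week 9 sketch, could look like this:

```typescript
// "One map with filters" alternative from point (4), reusing the hypothetical
// MapPoint and PointCategory types from the Week 9 sketch.
function filterPoints(points: MapPoint[], active: Set<PointCategory>): MapPoint[] {
  // Every point for an area lives on a single map; the user toggles categories
  // on and off instead of switching between separate maps of the same place.
  return points.filter((point) => active.has(point.category));
}

// Example: show only vulnerabilities and preparedness measures.
// const visible = filterPoints(allPoints, new Set(["vulnerability", "preparedness"]));
```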

Week 13 — Evaluative Testing and Video Storyboarding

We began the week testing our mid-fidelity prototypes with Elaine Fulton-Harris, a community organizer at the Wilkinsburg Family Support Center. Her work focuses primarily on getting residents out to vote, but she also supports Wilkinsburg families with issues of food security, child care, and housing. Although her work is not in disaster preparedness, Elaine was precisely the sort of community leader that we needed buy-in from for our solution to work. In the session, we presented Elaine with a clickable prototype and asked her to browse active maps and create a new map/initiative for her organization. I’ve summarized some of our top-level insights below.

  • From the moment we placed the phone in Elaine’s hands, we could tell she was uncomfortable with mobile technology. In many cases, we needed to prompt her to touch the screen and treat it as a normal, functioning mobile application. This may be partly attributable to our inexperience in running usability testing, but it also spoke to the limited technical literacy of many potential users. We need to reduce interactions and onscreen clutter in order to make this tool as functional as possible, with as little extraneous information as possible.
  • Call to action buttons were too small and weren’t high enough in the visual hierarchy of the screen.
  • It can be extremely difficult to get people to participate in community initiatives. Elaine thought that our app should do a better job of informing people, as prominently as possible, about what tasks or initiatives they can help with.
  • Understanding the demographics of a neighborhood can be immensely important to community organizers. Often groups lack the capacity to parse through Census data or other publicly available information. How could this information be easily integrated into our app? (A rough sketch follows this list.)
  • The phrasing of “members” isn’t exactly clear. We had intended it to mean people with editing privileges for the map, but Elaine was unsure whether a member denoted someone affiliated with the Wilkinsburg Family Support Center or the community writ large.
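
On the Census question, basic demographics are available through the Census Bureau’s public API, so pulling them into the app without asking organizers to parse raw tables seems plausible. Below is a rough sketch of what that might look like; the endpoint format, the variable code, and the geography parameters are my best guesses at the public ACS 5-year API and would need to be verified against the official documentation before we build on them.

```typescript
// Rough sketch of pulling one demographic figure per census tract from the public ACS 5-year API.
// The variable code (assumed here to be B01003_001E, total population) and the geography
// parameters are assumptions to verify against the Census API documentation.
async function logTractPopulations(state: string, county: string): Promise<void> {
  const url =
    "https://api.census.gov/data/2019/acs/acs5" +
    `?get=NAME,B01003_001E&for=tract:*&in=state:${state}%20county:${county}`;
  const response = await fetch(url);
  // The API is assumed to return rows of strings: a header row, then one row per tract.
  const rows: string[][] = await response.json();
  for (const [name, population] of rows.slice(1)) {
    console.log(`${name}: ${population} residents`);
  }
}

// Example: census tracts in Allegheny County, PA (state 42, county 003), which includes Wilkinsburg.
void logTractPopulations("42", "003");
```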

We integrated as many of these changes as we could into the next round of wireframes, which I’ve included a sample of below.

At the same time that we were making these changes to the screens, Jamie and I were shifting our focus to our proposed website, which aimed to tell the story of our app in an interactive web format. Although it cuts against the class norm of making a concept video to pitch a design solution, we thought the website would offer a more compelling, user-driven, and long-lasting way to pitch our solution. In my particular case, building a website would provide an opportunity to develop the web design and development skills that I’d like to learn. The downsides to this were that (1) it didn’t lend itself to a presentation as readily as a video, and (2) our lack of web development experience meant that there would be a steep learning curve. Below were some of my early iterations to explore the narrative possibilities of a web tool.

These screens gave me a lot of hope about the possibilities of a web experience. Sadly, they didn’t engender the same confidence in Peter or some of my teammates, and, ultimately, they advised that we switch to a video. This was not the decision I was hoping for, but it reflected the feelings of the team. The decision to shift underscored a lesson that I feel I’ve needed to learn over and over again over the course of this project: ideas are only as good as how they’re communicated. Time and time again, we’ve run into issues as a team because we have trouble communicating our thoughts and ideas early, often, and clearly to one another. In the case of the website, I should have made it more clear that web development would be a challenge, and that it wouldn’t allow all the functionality of a video built natively in Premiere or After Effects. Setting expectations in this way may have helped my team members understand how the development process would differ from a video development process, and perhaps helped communicate the value of the site too.

Week 14 — Preparation for the Final Presentation

We made the decision at the end of last week to transition from a website to a video. The website presented more technical and presentational challenges. We couldn’t guarantee that the site could be built to the level of fidelity that we wanted, nor did it seem like the website would be as much of an asset in our presentation. On the other hand, we were confident we could get to a high level of fidelity with the video and it could help to sell the emotional appeal of our solution.

This was a disappointing decision for me. I imagined that a website would live on past our presentation as a much more compelling representation of the work we did. A web experience involves the viewer in a much more immediate and interactive fashion. A video doesn’t live on past the presentation: we are bombarded by such a high volume of digital content that asking someone to press play and watch a concept video is unlikely to succeed, especially if the video is trying to explain a concept rather than demonstrate flashy visual skills. At the same time, as we began work on the video, I began to see its appeal over a website: emotional power. The video afforded a way to engage with viewers emotionally much more easily than a website. Without the limitations of the web structure, we were able to more easily build a world for viewers to enter and understand our project. Since Empact helps users prepare for moments of disaster and uncertainty, we wanted to use the video to convey that sense of emergency to the viewer, using news clips to trigger viewers to think of the reality of a disaster. If done right, the strength of our video will be its ability to get viewers invested in the importance of, and need for, our proposed solution.

In developing our video, we’ve run into the issue of what information to highlight. The video shouldn’t be longer than 2 minutes and a good portion of that will be spent setting up our problem and value proposition. So, in the limited time that we’ll be actually showing the mobile UI and the functionality of the app, what features should we include? We thought about including the creation of an initiative by a community leader, the addition of a data point by a community member, or the sharing of data gathered with an outside stakeholder. Each of these functions seemed key to the change-making process that our solution envisions: an initiative is created, volunteers help to gather data, and the data is used to make a compelling case and move the initiative forward. Ultimately, after discussing and storyboarding how these three features might be included in the video, we opted for the addition of a data point. It seemed to require the smallest amount of setup for the viewer and the addition of pictures and audio content to the data point would make it more visually compelling than just watching someone walk through a series of screens.

Week 15, Reflections on the Project

Following the final presentations this week, I feel proud of the level of fidelity and thought that we were able to put into the final prototype and concept. In the past four weeks or so since we began developing our final concept in earnest, we repeatedly adjusted things to suit the comments of people like Scott Wolovich and Elaine Fulton-Harris, who both work in the nonprofit community development space. I’m proud of how much our solution reflects what we heard from them and the ongoing conversations we had through the end of this project.

Ending this project, I’m left wondering about the process and all the smaller milestones along the way. Candidly, I wonder about the value of certain steps in the process, such as storyboarding so late in the semester, or the service blueprint that we submitted. Each of these steps had some kind of didactic value for us as students, but I’m left with doubts about the value they offered to the project. In fairness, the structure of this class gave students near-complete autonomy for moving the project forward, which means that we bear a lot of responsibility for the flaws in our process. As a team, we could have done a better job of project management and moving the project forward in small increments each week. At the same time, I think we would have benefited from a more coherent structure to the project and the smaller milestones along the way, which at times felt ad hoc.

We poured a lot of time into the video and UI of Empact. Ultimately, we ended with a fully clickable prototype and a refined concept video. On the whole, teams emphasized the same components of the project. A lot of final presentations offered a detailed walkthrough of a single digital interface. This approach gave viewers a tangible sense of what it would be like to use the product/service and how the final concept could operate in the real world, outside of an academic context. One or two teams, Capsule Closet in particular, went in a different direction. Instead of building out a complete interface and walking through a user task flow, they built a wider set of digital and physical touchpoints. Each of the touchpoints was less comprehensive than the interfaces other teams built, but together they gave a better sense of how the service would operate in different phases and aspects of a user’s life. In the case of Capsule Closet, the app had fewer screens, but we got a sense of how the app would interact with physical interventions in clothing textiles and the at-home experience of getting dressed.

In our case, the user interactions were mostly limited to a digital interface, and the ones that did exist outside of the interface were more spontaneous and limited to a disaster context. Much of the work that we were proposing was about independently adding local data on a smartphone. There aren’t a lot of cases where Empact would be used outside of a phone; perhaps there could be touchpoints when a user presents the data to an outside stakeholder, or accesses relevant data in an active disaster situation, but we didn’t think about how this might take place outside of the phone interface. This is not to say that including other non-mobile touchpoints is an unalloyed benefit. Watching the presentations, however, it was clear that presenting a wider and shallower series of touchpoints can help sell a product. If I were to revisit this project, I would certainly challenge our team to design touchpoints outside of the phone: perhaps a printed infographic of hypothetical data gathered on the app that could be used to present to a stakeholder, or the materials for a workshop that would acquaint people with the Empact tool and how it can benefit them. Spending some time to develop these touchpoints would go a long way toward making our solution more concrete and achievable in the minds of our audience.
