Normal for Norman? - Postmortem Writeup


Normal for Norman?
A Narrative Exploration of Memory in Virtual Reality

    “Normal for Norman?” is our attempt to build a short playable narrative experience that uses the unique qualities of Virtual Reality to explore the fragility of human memory, told through the perspective of an elderly man trying to remember the song he used to play on his trumpet as a child. We did this by creating an interactive environment that tells its story through objects, audio, and visual effects, and that allows the user to “move” through different eras of their avatar’s life by focusing on specific objects to solve a simple puzzle, evoking the embodied perception of recalling a memory.

Development

    We began development on “Normal for Norman?” by trying to answer a question we posed for ourselves: “How can we build a short, narrative experience exploring memory (and its fragility) that utilizes the uniqueness of Virtual Reality?” We knew that we wanted to tell an interesting story, and we all felt that Virtual Reality provides a framework for creating incredible experiences, but at first we struggled to decide how to approach making something that could only exist in VR.

    We started by thinking about the ways in which Virtual Reality could be used to tell a story, and realized that traditional forms of storytelling wouldn’t be taking full advantage of the tools that we had available. First, we made the connection that Virtual Reality itself is a way of experiencing false memories. When we put on a headset and are fully immersed in VR, we are essentially experiencing things that aren’t really happening, even though we can clearly remember performing tasks or activities. This sort of embodied cognition is powerful, and is at the core of how we are using game mechanics and narrative to explore the nature of human memory.

    Our initial thoughts were inspired by our own experiences with friends and family members who have struggled with memory problems, and because of this we wanted to make sure that we did our research and treated the subject with care. Dementia, for example, “is an umbrella term used to refer to a collection of symptoms that can result from a number of different diseases of the brain” [1], such as Alzheimer’s and vascular disorders that affect specific parts of the brain. While we weren’t designing a story that depicts a specific disorder, our hope was that we could raise public awareness and empathy by allowing players to experience something similar themselves.

Audience

    With that said, our potential audience is still quite small. We selected the HTC Vive headset for its ability to create room-scale virtual reality experiences that people can naturally move through without needing navigation methods beyond their own body. Exact sales figures are unknown, but it’s estimated that there are approximately 2.5 million headset owners, compared to approximately 2.5 billion smartphone owners [2]. Combined with the comparatively low sell-through rate of a popular VR game of a similar genre on the Vive platform, our potential reach is very small. How, then, can we hope to raise awareness about memory and disability through our experience?

    Our hope is that as VR technologies improve, low-cost and highly immersive platforms that produce the level of immersion necessary to create Place Illusion (defined as the “strong illusion of being in a place in spite of the sure knowledge that you are not there” [3]) will become more prevalent. This would allow more people the opportunity to experience what we have built. Because this is not currently the case, and we have the luxury of making something without needing to return a profit, we decided that it was more important to tell the story we wanted than to spread a message of awareness to a wide audience. Those who do have access to the technology (now and in the future) should then have a more focused experience that better translates what it is like to have difficulty recalling a memory.

Process

    As we were still learning the methods for creating virtual environments throughout the term, we focused on making rapid prototypes in Unity and playtesting often, so that we could iterate on our design as quickly as possible. Our initial design, in which an entity living inside someone’s brain solved memory-based puzzles across several rooms tied to deteriorating brain functions, quickly proved to be far beyond the scope of what we felt we could accomplish during the term. However, our physical prototype of the experience (where we acted out what we imagined the experience would be like) was a good exercise for determining the kinds of natural interactions we hoped to include.

    After we decided to change the narrative to the story of a man trying to recall his experiences playing the trumpet throughout his life, we realized that we would need help implementing the VR interactions in our prototypes. We decided to build the prototypes using low-poly objects and primitive shapes, relying primarily on 3D and sound assets from the Unity Asset Store and free resources online (with the exception of some unique assets we built ourselves). We initially included both Newton VR and Steam VR in our first prototypes, because we were unsure how each would feel. And because we don’t all have continuous access to a VR headset, we initially split development into three branches: one that could be tested in virtual reality, one that could be played with a mouse and keyboard (because Newton VR does not natively support a player controller), and one where we could start collecting and building our art assets for the final build.

Prototyping

    Our first technical prototype was an example room in which an object appeared and disappeared depending on when and where you were looking at it (defined in our code as “gaze”), similar to the effect seen in “Sightline VR” [4]. It was our very first attempt at playing with people’s actual memory and perception of their virtual environment, and it later formed the foundation of how the player navigates the narrative.

    In our first playable prototype, we built a simple white room with four primitive objects and a box-like structure meant to test our basic puzzle design. We had significant difficulty with colliders when we introduced our own scripts for combining objects (and thus assembling a trumpet and a memory at once), though we were able to solve this in later builds by reducing the number of combinable objects and changing the way they are constructed.

    We then playtested this build in class, which provided valuable feedback on whether our interactions were functioning the way we hoped. Despite the objective not being immediately clear, the act of trying to build the trumpet worked. We even had someone try to take it apart! This told us we were on the right track, but we soon learned that we had to make some significant changes.

Changes

    At this point, we had a solid foundation for a single puzzle, a narrative that spanned at least four distinct rooms with their own puzzles and sequences, and multiple distinct branches of code. With only half of our development time remaining, we needed to refocus. So when we sat down to discuss our second playable prototype, we massively restructured how the game was going to work.

    First, we decided that our own scripts for controller input were unnecessary, since Steam VR includes an interaction system. Second, we removed Newton VR from the project, because it proved fundamentally incompatible with running Steam VR concurrently and had no non-VR mode. Next, we had to decide on a universal structure for including assets in our GitHub repository, eventually settling on each team member working in a scene named after themselves, to be merged later.

     Lastly, and perhaps most importantly, we made the decision to change how the game was going to work. We still wanted the player to experience aging, and the process of “moving” between different eras of their avatar’s life, but we needed to find a way to put this all together in a single scene. We discussed ways of making the player the catalyst of change, triggering events based on specific objects that held significance to the character. We would then make our single puzzle (putting together a trumpet) the entire game by hiding pieces in different parts of Norman’s memory.

    Our second playable prototype was our attempt to get this “movement” working by triggering a change (initially with a key command and later by using gaze) and having the objects in the room be replaced by those from another environment in the same scene. It worked surprisingly well as a proof of concept, but didn’t really do much in terms of letting the player interact with the environment. The scene that our art team had been working on did, and we needed to find a way to combine the two.

Putting it all together

    Our breakthrough moment came when we merged our different branches together before our public playtest. In previous weeks, we had been working within the same GitHub project while building individual scenes that shared assets. We spent nearly twelve hours building a “Main” scene in our Unity project, intended to be the scene we would all work in for the rest of development.

    We started by cloning the environment that our art team had built alongside the playable puzzle prototypes. We then used a series of empty game objects renamed as “managers” of specific functions (i.e. Game, Audio, and Lighting) to collect the script functions from our previous prototypes and apply them properly to the prefabs, game objects, and elements of our new scene. We also adapted the structure of our second prototype (with three rooms that objects are moved between based on triggers) to our now more complicated environment, duplicating the room and including more diverse objects and assets for each copy. These would become the “timelines” the player moves through.

    Seeing the script that Tommy had created (which had previously only functioned with a few simple objects) move entire rooms full of objects seamlessly and subtly was a revelation. Following that, we incorporated Ben’s scripts for haptic feedback and for focusing on specific objects in order to “move” the player character backwards and forwards in time. This was intended as a diegetic interface that lets the player select game states naturally, using narratively significant objects they encounter during their playthrough. We had made significant progress towards our original goal, and we couldn’t wait to get people to play it.

External Playtesting

    Ben and Jade conducted our first public playtest on campus, outside the Tabletop Gaming student society at Goldsmiths, with six participants spanning a mixture of ages, genders, and ethnicities, as well as levels of VR experience and knowledge.

The Playtest

    Prior to the playtest, we discussed how we would make the playtest fair, precise, and as enjoyable as possible for each player. Initially, we thought we would run a full playtest where the player would experience the whole game. We then realized that a playtest like this would take too long, and we wouldn’t be able to get as many players to test it. Additionally, we thought it would be too difficult for inexperienced players, and would therefore negatively impact our final results. We eventually agreed to run a short playtest in which, after the players found all the trumpet pieces, the game would present a “Normal for Norman? Thanks for Playing!” screen.

    The playtest was done in five stages: Introduction, Playtest, Interview, Feedback Form, and finally the Debrief. We asked each player to come through one by one, to ensure accuracy in our results. They were asked to sign a consent form allowing us to study video footage captured during the playtest, and were given a brief introduction that didn’t give away the solution. We recorded both screen capture and external footage from a GoPro camera.

Feedback

    After the experience, players were interviewed in a separate area, and were asked to fill out an online feedback form before being taken into a separate room for a debrief on the full narrative of the experience.

    We broke the feedback questions into categories (Comfort, Game Design and Controls, Accessibility, and Recommendations), each graded on a Likert scale from 1 to 5 (1 being extreme dislike, and 5 extreme like, of the item in question).

    A summary of the reactions from both qualitative and quantitative results will be included here, and the full results can be found in the Appendix of this report.

     The top recommendation was to make the objectives clearer to users. Two participants scored the objectives as unclear, two were indifferent or unsure, and the remaining two thought the objectives were clear. Additionally, many playtesters said that they had a different idea of the narrative than we had intended. This led us to believe that we had to make significant additions to the experience before release.

    Other feedback about colour scheme and lighting was taken into consideration because we want this to be an accessible and enjoyable experience for all, including people with sensory impairments, which may affect their ability to use the application effectively. Feedback that non-interactive items, and objects not doing what they would do in real life (e.g. the vinyl player not working), were negatively impacting the experience gave us the knowledge we needed to make the experience more credible.

Iteration and Completing the Final Build

    While our public playtest drew from a small sample size, the feedback we received from players proved an invaluable addition to our own experiences with the public playtesting build, and we set up a prioritized list of tasks to complete over the weekend before our live demonstration in class.

    To address concerns with the narrative and objectives, we set out to include the combinable scripts from our first puzzle prototype, record additional voiceover, introduce positional audio cues to draw the player’s attention to specific objects, change the lighting (both color and intensity) between each timeline, add more diverse and distinct objects and textures, and finally implement the trumpet melody that Alex had been working on that ties the thread of the narrative together thematically.

    We also needed to work on the movement between timelines, because the objects we were using to trigger the change (a birthday card, a record player, and a cabinet) were all objects the player needed to interact with. After the playtesting session, it became clear that players should be able to handle these objects without accidentally triggering a memory change. To solve this, we created an image of each room in each era and hid the images in picture frames that players can interact with should they choose to, giving them a clearer idea of what they are doing.

   As for issues with the interactions in virtual reality, we set out to make sure that every object in the game that should be interactive is, to adjust the colliders and weights of each object, and to make the experience as stable as possible before release.

Final Demonstration

   For our final demo, we wanted to make sure that there was a puzzle using interaction with the record player, that the objective was clearer, that people could move seamlessly between timelines, and that the game displayed credits when the experience was complete. Some of these changes still need work (people still aren’t figuring out how to move backwards in time on their own, and the record player puzzle broke in class when another record was placed on top), but they serve as a baseline for how we would like to build on the project if we return to it in the future. For the most part, feedback in class was very positive.

    And although we didn’t manage to accomplish every task we set out to do when making the final build, we are extremely proud of what we have managed to complete, and we couldn’t have done it without the expertise and effort of each member of our team.

Individual Contributions

    In this section, each team member will describe their individual roles, and how their contributions affected the entirety of the project. This will allow us to be more specific when discussing solutions to problems encountered during development, and provide an opportunity for us to showcase the structure of the project.

Matthew Deline

    My primary responsibilities for this project were project management, narrative and experience design, and writing and editing the majority of our final report (aside from the individual and playtesting portions). If you’ve played the game, you will have also heard my voice as Norman.

     As group leader, I started by organizing brainstorming sessions and meetings to decide what we wanted our project to be, and what roles each person would take during development. I created a Trello page for listing and assigning specific tasks, and have been responsible for making sure that we are staying on track. Throughout the project, I have organized meetings, internal playtests, development sessions, and acted as liaison between teams when schedules didn’t match up. I also organized, structured and scheduled rehearsals for each of the in-class presentations for the project.

    Additionally, I co-wrote the narrative with Alex, assisted with the selection of objects for telling the story, and was responsible for the main puzzle and player experience design. I assisted with the implementation of audio assets (like public domain recordings, voice over dialogue, and example audio for early prototypes), and recorded feedback for in class playtests of said prototypes. In essence, I was the glue that kept the project together during development.

Alex Fletcher

    As a member of both the audio and narrative teams, my assigned tasks for this project included sound design, storytelling, and collaboration on building 3D model assets. My experience as a trumpet player, and my connections with musicians who have battled memory loss, helped us create a story about an elderly man trying to recollect his past memories as a professional trumpeter.

    To serve this purpose, I worked with our art team to establish objects and textures that would make sense in the world of the story. We collected a number of free assets and textures from the Unity Asset Store for each individual timeline. Unique objects (the window and blinds) we modelled in Blender (using its built-in Archimesh add-on) and exported as .obj files. I also modelled a to-scale trumpet in Blender, built from multiple individual parts, which was used in the final build as the central piece of the puzzle. Finally, I recorded a melody (Nat Adderley’s “Work Song”) on trumpet to be used when the trumpet is played.

    When designing sound, I worked with Jade to collect different sounds that tell a story through ambient effects. In combination with the narrative, this creates an embodied, nostalgic feeling through audio in VR. We layered foley in Logic Pro X from individual sound bites taken from a royalty-free sample website (zapsplat.com) and archived BBC radio shows. We then created two sets (for outdoor and indoor environments) for each room. In total we have six loopable .wav files of around 2 minutes 20 seconds each, each telling its own story.

    For example, in the final room elderly people talk about their memories in the background. When you move closer to the window the outdoor sounds get louder, and when you move closer to the door the indoor sounds get louder.
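Effects like this are mostly a matter of AudioSource configuration in Unity rather than custom code. As a minimal sketch (the class name and distance values here are invented for illustration, not our exact setup), a looping ambience source placed at the window or door attenuates with the listener’s distance once it is marked fully 3D:

```csharp
using UnityEngine;

// Illustrative sketch: one looping ambience bed per zone (e.g. the
// outdoor loop at the window, the indoor loop at the door), set to
// fully 3D so Unity attenuates it by the listener's distance.
public class AmbienceZone : MonoBehaviour
{
    [SerializeField] private AudioClip loop;   // e.g. the outdoor or indoor bed

    private void Start()
    {
        var source = gameObject.AddComponent<AudioSource>();
        source.clip = loop;
        source.loop = true;
        source.spatialBlend = 1f;                       // fully 3D (positional)
        source.rolloffMode = AudioRolloffMode.Linear;   // predictable falloff
        source.minDistance = 1f;                        // full volume within 1 m
        source.maxDistance = 6f;                        // inaudible beyond 6 m
        source.Play();
    }
}
```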

    In addition to environmental audio, I helped collect a set of audio effects to be used for interactions in the game, but we were unable to integrate them completely by the time of release. This is what I would most like to work on if we continued development.

Tommy Graven

    Throughout this project I worked as a developer: I built unique scripts that were critical for the game to function, fixed bugs, optimized performance, and worked in person with the rest of the team to merge our work for each major milestone.

    I tried to make my scripts as simple and flexible as possible, with intuitive variable names, clear comments, and loosely coupled functions, so that the rest of the team (primarily the developers) could easily implement changes when testing in VR, and so that our modelling team could understand them. This was a challenge for me, because I needed to plan for other team members, which is something I had not needed to do in the past.

     I started by implementing an FPS controller and a script for interacting with objects, and I used this controller throughout the whole project so I could iterate quickly between developing and playtesting without needing VR equipment at all times.

    Then I built a script that makes it possible to combine objects when they collide with each other, which we use for the trumpet pieces. It works by destroying the original gameObjects on collision and instantiating a new gameObject at the point of collision, chosen based on which objects collided. This was the basis of our first playable prototype, and was improved for the final build by making the combination a function of the distance between the objects.
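A minimal sketch of that approach (the class and field names are hypothetical, and this omits the later distance-based refinement):

```csharp
using UnityEngine;

// Illustrative sketch of the combine mechanic described above: when
// two matching trumpet pieces collide, both originals are destroyed
// and a pre-built combined prefab is spawned at the contact point.
public class CombinablePiece : MonoBehaviour
{
    [SerializeField] private CombinablePiece matchingPiece; // the piece this one joins with
    [SerializeField] private GameObject combinedPrefab;     // prefab representing the joined pieces
    private bool consumed;                                  // guards against double-spawning

    private void OnCollisionEnter(Collision collision)
    {
        var other = collision.gameObject.GetComponent<CombinablePiece>();
        if (other == null || other != matchingPiece) return;

        // Both pieces receive OnCollisionEnter in the same physics
        // step, so only the first one to run performs the swap.
        if (consumed || other.consumed) return;
        consumed = other.consumed = true;

        // Spawn the joined object where the pieces touched.
        Instantiate(combinedPrefab, collision.contacts[0].point, transform.rotation);
        Destroy(other.gameObject);
        Destroy(gameObject);
    }
}
```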

    To create the main mechanic, which gives the player the illusion of seamlessly changing timelines when they trigger certain events, we needed multiple rooms with different interiors, each representing a different timeline. When an event is triggered (such as picking up a trumpet piece), objects from the current interior progressively disappear while the player is not looking at them, and are replaced with objects from the new interior, so as not to break the illusion of being in that place.

    To make this easy, I designed the script so that, when working on a new timeline interior, we make a new folder of objects and drag and drop it into a list of timeline interiors in the inspector of a special gameObject (called RoomManager). Modellers can then customize each interior by adding and removing objects in these folders.

    Because this script only works with objects organized in a certain way, I built a recursive function that iterates through each gameObject’s components and changes the rigidbody, collider, and mesh renderer settings depending on whether or not the object is active in the scene. A simplified sketch of both ideas follows.
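As a rough reconstruction (the method and field names here are invented, and the real script swapped objects gradually while they were unseen rather than all at once), the structure might look like this:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch of the RoomManager idea: each timeline interior
// is a root object assigned in the inspector, and switching timelines
// recursively toggles the rendering and physics components under each
// root so inactive interiors neither render nor collide.
public class RoomManager : MonoBehaviour
{
    [SerializeField] private List<GameObject> timelineInteriors; // one root per era
    private int currentTimeline;

    public void SwitchTo(int timeline)
    {
        SetInteriorActive(timelineInteriors[currentTimeline].transform, false);
        SetInteriorActive(timelineInteriors[timeline].transform, true);
        currentTimeline = timeline;
    }

    // Recursively walk the hierarchy, toggling each object's renderer,
    // collider, and rigidbody according to the interior's state.
    private static void SetInteriorActive(Transform root, bool active)
    {
        foreach (var meshRenderer in root.GetComponents<MeshRenderer>())
            meshRenderer.enabled = active;
        foreach (var col in root.GetComponents<Collider>())
            col.enabled = active;
        foreach (var body in root.GetComponents<Rigidbody>())
            body.isKinematic = !active;   // freeze physics while hidden

        foreach (Transform child in root)
            SetInteriorActive(child, active);
    }
}
```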

Jade Hall-Smith

    During this project, my role was to research disabilities that affect memory. Because dementia is a main influence on our VR application, most of my research covered how dementia progresses and which parts of the brain it affects at each stage. I also looked into other neurological conditions, including neurodevelopmental disorders that can cause memory difficulties, to show how other disabilities impact memory.

    My other responsibility was to apply my research to the narrative design, so we could create an accurate experience that was sensitive to people who suffer from these disorders. I created some ideas for puzzles that manipulate audio, relating to issues with the temporal lobe, a part of the brain affected during the early stages of dementia.

    I worked closely with Alex on the sound engineering, because we wanted players to feel truly embodied in the VR application. To support this, I was responsible for documenting the sounds we collected from ZapSplat, YouTube, and Archive.org. We also used this document to keep track of which sounds should be played, and when.

    It then fell to me to organize an external playtest with a group from the tabletop games society, ensuring a diverse mix for better results, including both inexperienced VR players and people with VR gaming experience. I collected feedback through a Google Form as well as interviews with each playtester, gathering the data we analyzed to help build the final iteration of our VR application.

Doruk Hasdoğan

    During our initial discussions for the project, I didn’t support the idea we chose: I felt it was too narrative-focused and wasn’t unique to VR, and that it could just as easily have been a first-person puzzle-room game without it. But by voicing my opposition and pushing the team to think outside the box, I believe we achieved something that needs VR. To do so, I drew on my previous experience developing for the Vive hardware, through which I was familiar with the assets we would need to start creating our scene. This is why I decided to focus on level design and the visuals that players interact with most.

    I also created and organized the GitHub repository, was responsible for component and model implementation, and attached scripts to game objects in an optimized way. I started by creating an empty Unity project and importing two assets, Newton VR and Steam VR. My initial plan was to use Newton VR’s collision sound framework (which isn’t in the final build) alongside Steam VR’s player controller, circular and linear drives for the drawers, and throwable objects for interactions. I made prefabs from Alex and Ece’s assets by adding colliders, rigidbodies, weight and drag properties, and the relevant scripts.

    We used these prefabs while building our original version of the scene, which was then combined with Tommy’s script to create our Main scene with its three distinct timelines. We placed the prefabs in copies of the room in the same scene, following the folder organization the script needed to function. I used Unity’s terrain and tree generators and the built-in “wind zone” component to create a natural-feeling outdoor environment. We briefly added depth of field and motion blur, but when we realized that they caused severe simulation sickness we decided to stick with bloom and ambient occlusion as our only post-processing effects.

    During our public playtest we received feedback that the rooms weren’t distinct enough, and testers weren’t realizing that the scene was changing. Additionally, the trumpet pieces existed in every reality, and severe bugs would occur when people put those pieces in odd places and then changed timelines. For example, incorrect objects (like drawers inside of beds) would spawn in the scene; their colliders would get stuck and launch the trumpet in a random direction outside the play area, making the game impossible to finish. To fix this, we made and carefully laid out a significant number of new objects (like wheelchairs and musical instruments), materials (like wallpapers, posters, and new textures), and completely new exterior environments (like the city). This made the effect much more noticeable.

Ece Hasdoğan

   As a critical member of our art team, my responsibilities included creating the initial scene design of the room, finding asset models with the help of Alex and Doruk, modelling in Maya, finding, creating, and manipulating textures in Adobe Photoshop, making materials, attaching components (like colliders, rigidbodies, and throwables) to gameObjects, preparing and organizing the project’s prefabs, and assisting with the placement of objects in rooms within the scene to make them distinct.

   I have a collection of free assets from a previous project [5] that I used as a base for some of our initial assets, manipulating them significantly in Autodesk Maya to make them our own. I did this by making the objects low-poly, removing or adding detail, introducing structures, and creating collaged 3D objects that combine several different details from my library. I also made completely new objects, like the wardrobe, a basic short square table, a simple lamp, and more.

   One way I added textures to these models was by finding black-and-white photographs to represent what I thought memories of Norman’s life should look like. There are at least four of these images in the game, each showing a different life period (like marriage, family, and trumpet playing), distorted with blur and stain effects to give the feeling that the memories are unclear.

    However, during our public playtest we received feedback that the narrative flow was confusing, so I continued making new assets that told the story visually (like the black-and-white photographs) to make sure our story transitions worked. I collaborated with Doruk to add new carpet textures, children’s drawings, coffee mugs, a house in the exterior environment, and more. I also adjusted the colliders for all of these new objects (along with previous ones that weren’t working properly) to make the experience more plausible.

Benjamin Tandy

    My role within the team was to share C# programming duties with Tommy by implementing the VR interaction scripting beyond what the SteamVR asset affords, and to focus on rapidly prototyping puzzle ideas that we could test in VR. I also assisted Jade with the external playtesting session.

    My first task was to build an initial proof-of-concept prototype that showed household objects disappearing and reappearing outside of the player’s vision (when they weren’t looking), by writing a C# script that checked whether the object was within the frustum of the “head camera”. When the object was not, it would alter itself or remove itself completely. This effect was successful, and Tommy extended it to work on a much larger scale.
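A minimal sketch of such a frustum check, using Unity’s GeometryUtility helpers (the class and field names here are invented):

```csharp
using UnityEngine;

// Illustrative sketch of the gaze check: build the head camera's
// frustum planes and test the object's bounds against them. While
// the object is outside the frustum, it can be altered or removed
// without the player noticing.
public class HideWhenUnseen : MonoBehaviour
{
    [SerializeField] private Camera headCamera;   // the VR head camera
    private Renderer objectRenderer;

    private void Awake()
    {
        objectRenderer = GetComponent<Renderer>();
    }

    private void Update()
    {
        Plane[] frustum = GeometryUtility.CalculateFrustumPlanes(headCamera);
        bool seen = GeometryUtility.TestPlanesAABB(frustum, objectRenderer.bounds);

        if (!seen)
        {
            // Out of view: safe to change the object invisibly.
            gameObject.SetActive(false);   // or swap it for another era's object
        }
    }
}
```

Note that a frustum test ignores occlusion (an object hidden behind furniture still counts as “seen”), which is a conservative but safe choice for this effect.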

    I next built a simple experience using Unity and the NewtonVR plugin which allowed a player to pick up and try to combine parts of a trumpet that played different notes when the player held them to their mouth. We built it quickly over the course of an afternoon using primitives and playtested it the following week during class. It allowed us to confirm that players could understand the basic concept of picking up and attempting to combine objects in the context of our puzzle without explicit instruction in the form of a text or audio prompt.

    Because we had chosen to develop for the HTC Vive, we decided to build for room-scale tracking and tried to design the rooms so that a teleportation mechanic would not be needed. We settled on a 2.5m x 2.5m space as a size we could rely on for development and for showcasing the project. However, we found that in VR a room of that size felt quite claustrophobic. We used immovable furniture models (such as a bed and a coffee table) to extend the room beyond the bounds the user could walk around, and added a window that opened the space up further by showcasing the exterior environments built by Ece and Doruk.

    We also found that we needed to guide the player’s attention to certain objects. To prompt the user to examine specific objects, I built a script that would dim the main lights and bring up a spotlight when the player’s eyeline was pointed at them. I also dropped the volume within the scene to mimic the dramatic quiet moments common in film-making. This had a pleasing effect, but ended up causing confusion during playtesting as it distracted from the primary puzzle of the trumpet pieces.
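A simplified sketch of that attention cue (the class, fields, and values are invented for illustration): cast a ray along the player’s eyeline, and while it lands on the highlighted object, dim the room light, enable the spotlight, and pull the master volume down.

```csharp
using UnityEngine;

// Hypothetical sketch of the gaze-driven spotlight effect, attached
// to the object we want the player to examine.
public class GazeSpotlight : MonoBehaviour
{
    [SerializeField] private Transform head;        // VR head transform
    [SerializeField] private Light roomLight;       // main room light to dim
    [SerializeField] private Light spotlight;       // aimed at this object
    [SerializeField] private float dimmedIntensity = 0.2f;

    private float normalIntensity;

    private void Start()
    {
        normalIntensity = roomLight.intensity;
    }

    private void Update()
    {
        // Is the player's eyeline pointed at this object?
        bool focused = Physics.Raycast(head.position, head.forward, out RaycastHit hit)
                       && hit.transform == transform;

        roomLight.intensity = focused ? dimmedIntensity : normalIntensity;
        spotlight.enabled = focused;
        AudioListener.volume = focused ? 0.5f : 1f;   // the film-style hush
    }
}
```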

    Towards the end, I worked on implementing human hand models to replace the Vive controller models we had been using, as the controllers did not make sense in the historical era of the story. I worked with an untextured model from the Unity Asset Store that included a gripping animation. When I attached the hands to the default controller prefabs from SteamVR, it took a number of attempts to get their position and rotation to feel natural. Because the player grips the Vive controller like a gun, it was remarked during internal playtesting that it felt strange to see your hands open inside VR. We found that rotating the hand model so that its back was roughly in line with the back of the player’s hand achieved the closest embodiment illusion we could hope for.

    My final duties were to implement programmatic cues to play Alex and Matt’s audio, to build a script that allows the player to place and play vinyl records on the record player model, and to do final testing and bug fixing before project completion.

Conclusion

    In summary, we believe that we have managed to build a short narrative experience that explores memory (and its fragility) while utilizing the uniqueness of Virtual Reality. We were successful in implementing several features of highly immersive virtual reality systems: haptic feedback, positional audio, room-scale navigation, diegetic interfaces using objects in the environment, non-diegetic curved interfaces for changing game states, natural gestures for solving puzzles, and more. And while we may not have as many puzzles as we originally intended, we feel we succeeded in our original goal of using these features to tell an interesting story in a way that couldn’t be told the same way in any other medium. As adoption rates for room-scale virtual reality systems like the HTC Vive increase, it is our hope that our experience (and others like it) will see increased visibility, with the additional positive effect of increasing empathy for and awareness of those with memory disorders.
