Interactive storytelling: providing control to a consumer of online news video

Dissertation project researching different formats of interactive video storytelling in the context of the ease and efficiency of production. This work was submitted for a module on online journalism and graded first-class by Birmingham City University.

The dissertation also made reference to a series of blog posts outlining the progress of the project, ‘Privacy, or security’. The text below is taken directly from my printed submission and has been adjusted for online use.



INTERACTIVE STORYTELLING: PROVIDING CONTROL TO A CONSUMER OF ONLINE NEWS VIDEO

1. SUMMARY OF THE PROJECT

For my MA by practice project, I have researched the production of interactive video to determine its viability as a format for online video that can allow its viewer to control the content they receive. “Online video journalism should not be thought of as ‘television on the web’. Or, indeed, as a lesser form of storytelling…it can also encompass innovative and creative ways of storytelling.” (Domokos, cited in Bradshaw 2011: 106). To learn more, I produced a short video, in several different formats, introducing a debate about the relationship between privacy and security in an online context.

The idea behind the project was partly inspired by the general and widely noted shift in news engagement from newspapers and television to an online and social media environment. Changes in the consumer technological landscape have enabled consumers to watch videos anywhere; at the same time, bandwidth has become cheaper and more plentiful, with the cost of mobile data plans falling in many countries (Cherubini et al.: 2016), making online content more accessible.

“Online news video is particularly important to study because the format enables the production, distribution, and use of forms of digital news content that were previously more constrained by the limitations of hardware and connectivity” (Kalogeropoulos and Nielsen: 2017).

I originally set out to establish a method of presenting more of what is typically a cut interview without driving the viewer away from the original content. However, as I began to identify a definition of interactivity, I realised that the project should instead broaden to include the viewer’s ability to control the direction of view within the video, as well as the content on display.

The artefact I produced for this project consisted of four videos in varying formats, as is explained in more detail in the review of the process. In addition to the four videos, I wrote a series of blog posts on my website detailing the progress of this project on a weekly basis.

2. REVIEW OF THE PROCESS

“The meaning of the concept ‘interaction’ depends on the context in which it is used” (Jensen: n.d.). In order to produce an interactive video, it is important to define what interactive means. The word has multiple dictionary definitions, but for the purposes of my project I have defined it in two ways:

  • The viewer’s ability to control the content displayed on screen, and
  • The viewer’s ability to control the direction of view within the video.

These two definitions offer interactivity in different ways, but achieve the same goal of providing the viewer with a degree of control over the content. To explore these further, I created the same video in four different formats. Firstly, I set out to produce a ‘default video’ without any form of interaction that would act as a primary point of comparison. I then developed two versions of this video that explore the ability to control the content, and a format that explores the ability to control the direction of view.

As detailed in the note on artefact compatibility, the video formats I ended up with are:

  • Default,
  • Interactive by online tool (Thinglink),
  • Interactive by online code (CodePen), and
  • Interactive by virtual reality (360º).

For the first method of interactivity, producing the same video both through my own code and with the ‘what you see is what you get’ (WYSIWYG) online editing tool Thinglink allowed me to better understand the development journey and the finished product as two separate entities. This was necessary because my level of knowledge and skill in coding was not enough to produce an artefact adequate for critical evaluation in the context of format development. The Thinglink product allows for this evaluation, and also lends weight to the related argument that an editing tool removes the inefficient production behind self-coded interactive videos.
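
To illustrate what the self-coded approach involves, the sketch below shows one minimal way a clickable element can be overlaid on an HTML5 video so the viewer can choose to watch an extended interview clip without leaving the page. This is my own illustration for this write-up, not the code used in the project’s CodePen artefact, and the file names and element IDs are placeholders.

    <!-- Minimal sketch: a button that pauses the main package and plays an extended clip. -->
    <video id="main" src="main-package.mp4" controls></video>
    <video id="extra" src="extended-interview.mp4" controls hidden></video>
    <button id="more">Watch the full answer</button>

    <script>
      var mainVideo = document.getElementById('main');
      var extraVideo = document.getElementById('extra');

      // When the viewer opts in, pause the main package and reveal the extended clip.
      document.getElementById('more').addEventListener('click', function () {
        mainVideo.pause();
        extraVideo.hidden = false;
        extraVideo.play();
      });

      // When the extended clip finishes, hide it and resume the main package.
      extraVideo.addEventListener('ended', function () {
        extraVideo.hidden = true;
        mainVideo.play();
      });
    </script>

Even a simple example like this shows why the self-coded route is slower than a tool such as Thinglink: the interactive layer has to be written and tested separately, after the video itself has been completed.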

The second method of achieving interactivity is virtual reality video. This format has been adopted online for marketing and social media purposes, but it is still very new in a news context. The New York Times publishes daily 360º content, but usually as additional media to a long-form article rather than as the primary content itself: “It’s not necessarily the full story, but it’s kind of an interesting view into something most people don’t get to see” (Widmer, cited in Cullen: 2017).

BBC’s technology programme Click produced an entire episode in virtual reality – providing inspiration for the production style of this project. By using specialist equipment available to everyday consumers, I was able to film content in 360º and give the viewer the opportunity to control the direction of view. My hope was that this would enable a more realistic viewing experience in which, for example, the context of an interview would be felt most strongly by ‘providing them with a seat at the table’.

After producing some test footage ahead of my first interview, I found that this emotional connection was a likely result, and the completion of the full virtual reality video confirmed this theory. “Young people don’t have televisions, and don’t give a damn about the broadcast infrastructure. The only thing they are interested in is content, and they are not fussed about how that content is created, it comes down to an engaging story,” (Mulcahy, cited in Scott, 2017).

However, there are disadvantages to using virtual reality media, specifically around access to the content. Web browser and website compatibility can affect the viewing experience. At present, as a Mac user, I could not use the built-in web browser Safari to view virtual reality content on the YouTube website and was required to download an additional browser. The issue is similar on some mobile devices, where third-party apps were required to view content. It is worth noting that virtual reality content is not supported on television either, as there is no way to control the direction of view; however, this particular disadvantage was not relevant to my project, since an interactive video in any format would be incompatible with television for the same reason: it requires some form of viewer control.

As highlighted in the project proposal, I intended to produce a primary video of between 10 and 15 minutes in length. In order to produce these videos, I would rely on the contribution of interviews to provide meaningful insights into the subject being explored. During the production stages of this project, I realised that I did not need a full-length video to demonstrate the interactive capabilities, and the difficulty in sourcing more than two interviews gave reason to cut the video down to approximately three minutes, which ultimately did not detract from its purpose.

A benefit I had not considered until this point was the impact of interactive video within social media. As noted by Cherubini et al. (2016), “The most successful off-site and social videos tend to be short (under one minute), are designed to work with no sound (with subtitles), focus on soft news, and have a strong emotional element.”

With regard to interactive content, “An uptick in mobile video viewing continues to drive the popularity of interactive video and stress the importance of encouraging interaction…furthermore, if viewers respond positively to interactive video, they’ll be more likely to follow CTAs and may even share your video with their friends” (522 Productions: n.d.). A common call to action (CTA) on social media is ‘share this’, as used by AJ+ (Appendix: Image 1), which instructs the viewer to redistribute the video to their connections, increasing the audience size.

Due to this ‘uptick in mobile video viewing’, and as previously noted, I needed to test the compatibility of these videos on mobile, and uploaded the ‘Introduction to 360º’ video to Facebook for testing. Despite it being uploaded to a private Facebook profile, I quickly learned that it was not only viewable on both mobile and desktop, but also gained a positive reaction from viewers, who appeared excited by the format.

3. REFLECTIONS ON THE MEDIUM IN A PROFESSIONAL CONTEXT

As a viewer of the products I created, I believe that, taken at face value as a news product, the method of interaction that controls the content on display is neither as innovative nor as engaging as the virtual reality product. From a production point of view, the process proved time-consuming and inefficient. The interactive elements could not be applied until after the video was completed, which means production roles cannot be carried out in parallel; this is true for both the self-coded and online-tool versions.

Without the necessary skills and experience, or access to tools like Thinglink, a journalist would not easily be able to create such content quickly. And there is a need for speed in news: “Stories are published as quickly as they can be written” (Smith, 2007: 148). “So-called push technology…makes it possible for people to get their news virtually as it happens.”

In direct comparison, its virtual reality counterpart, the 360º video in this project, took significantly less time to produce. And if a news package similar to the one I made is not needed, it is still possible to upload an excerpt from a bigger production quickly, straight from a smartphone, which could eliminate any need to edit altogether.

But the idea of producing news within a virtual reality format has opened up a new level of context which I believe will change future development of online news. The ability to bring the audience into the scene is a significant upgrade to the existing model, where viewers of online and televised video are limited to watching. This creates a new emotional connection between the audience and the content: “In consuming this type of content, the participant experiences vulnerability and openness while living the story” (Hernandez, 2017).

Krogsgaard, who has looked at the use of 360º for interviews, notes: “The purpose was not just to give a voice to members of the French electorate who are rarely heard, but also to show their lives. And what better way to do that than to transport the viewer into their living rooms?” (2017).

I do believe, however, that future researchers should concentrate on presenting different types of news content within virtual reality in order to gauge its usefulness as a format. For example, a tour of the aftermath of a natural disaster may serve well in 360º, however a speech or press conference may not.

A consideration for the future use of interactive video, specifically virtual reality content, is the need for spatial awareness. During the production of news video, contributors sign release forms that confirm their agreement to participate. Aibel (1988) recognises the need to begin by acknowledging that our actions are subject to an ethical code, and by recognising that we have a moral responsibility to the people who participate in our films. After interviewing Sir David Omand in his office, it became apparent that care needed to be taken to ensure private information is not unintentionally made public, because a 360º camera lacks the controlled framing of a conventional camera.

As I discovered during the editing process, the viewer has the ability to ‘look down’, and if the camera is placed on, or near, a desk with sensitive objects, there is a risk of exposing private information. In my case, the camera’s recording quality was too low to create such an issue; however, higher-quality equipment would produce a different result. Despite using my own contributor release forms (Appendix: Contributor Release Forms) for my interviews, I would argue that news organisations using this format need to establish new contributor release forms and editorial guidelines on 360º production that take the immediate environment into consideration. At present, codes of conduct do not explicitly refer to the unintentional publication of private information through photography, or indicate that such an issue is possible.

However, writing a code of conduct is not easy if it is to fulfil its aims. It should be short and easy to remember (Frost, 2007: 249), so developing additional codes of conduct, or extending existing ones, could have a detrimental effect on the video producer unless the code is written ‘unambiguously’ and “says clearly and concisely what is expected”. The Independent Press Standards Organisation (IPSO: 2016) Editors’ Code of Practice rules that “Everyone is entitled to respect for his or her private and family life, home, health and correspondence, including digital communication” – a clause that indirectly covers documents on a table in view of a 360º camera.

This concern is addressed not only by ethics, but also by shot framing and the camera’s physical position within the environment. What I learned through general practice was that the ‘rule of thirds’ – a photography concept which divides the image into nine equally sized boxes so that content can be positioned where it is more eye-catching – no longer applies as a full grid of horizontal and vertical lines, but only as two horizontal lines. Since the viewer is able to change the direction of view, there is no optimum horizontal placement of the subject, so the vertical lines are no longer required (Appendix: Images 2 – 5).

In the 360º footage I filmed in the park, the upper and lower thirds contained nothing of value to the viewer, so the horizontal lines became a useful reference point. However, in the office environment where I interviewed Sir David Omand, the lower third was just as interesting as the rest of the scene. This is contrary to the introduction piece filmed in the television studio, where there was some value in the upper third. But in each of those instances, the main content was situated in the middle third.

Another previously unconsidered complication with virtual reality content is the ‘stitch line’ – the point at which the recorded video is joined up to create the spherical image. Aiming the stitch point toward the light source produced a better exposure than having only one lens face the light, and there is a need to avoid having a subject cross through a stitch point while too close to the camera, as this distorts the subject as they pass between lenses (Widmer, cited in Cullen: 2017).

The next phase in developing interactive video will very much rely on audience analytics, and whether it can make money. An initiative set out by the New York Times in 2015 saw the publication establish a virtual reality team, in a bid to help it generate $800 million in digital revenues by 2020 (Thompson, cited in Nicolaou: 2016). But it is vital to understand which formats work better than others, and what the audience is interested in, in order to take interactive video content further and to manage a sustainable business model.

And continued work to optimise consumer marketing and configure pay models is needed for further development (Thompson, cited in Lauchlan: 2016):

“What’s been very pleasing is that we’ve seen our various measures of engaged users, the number of people who are coming…the amount of time spent, the breadth of content that people are looking at, is also ticking up. So we believe because both of the underlying quality of our journalism but also improvements to our digital products that we’re seeing a rising tide of engagement with The New York Times.”

This shows that there is worth in developing production tools in-house to create interactive content, whether virtual reality or another interactive format. I believe that if the journalist has access to, or is able to produce, an editing tool that writes the code, then interactive video as a storytelling format would add value to the journalism. However, the process of writing the code from scratch makes the overall production inefficient.

4. PERSONAL PROFESSIONAL DEVELOPMENT

Before beginning this project, my knowledge of producing video was limited to flat formats. I had not yet explored online code in a multimedia context; my pre-existing knowledge of basic HTML and CSS was partially transferable to the JavaScript needed for the self-coded video; however, the gaps in my knowledge prevented me from producing a video to the standard I originally intended.

While this knowledge is not enough to allow me to explore coded content professionally, knowing that these gaps exist, and where they lie, gives me a starting point from which to develop further. Had I spent more time learning the code from the beginning, the artefact I produced might have been finished to a standard I would have preferred. But the decision to shift my focus away from the underdeveloped format to other parts of my project demonstrated both editorial and practical judgement, given the need to consider the other formats.

Up until this project, I had also not experimented with virtual reality video. I enjoyed watching the episode of BBC Click in the 360º format, but had never been in a position to produce such video myself. What I already knew and understood about flat video production was mostly transferable – the significant difference being the framing of the shot and the camera placement, as mentioned in the previous chapter.

This demonstrates a need to continue the experimentation with virtual reality video. As I stated earlier, it would be beneficial to understand how this format can be used in different news contexts and different environments, not just for creating engaging content but also to establish monetisation opportunities and revenue sources for the publisher.

To conclude, I think the project I undertook was too broad to investigate fully within the timeframe. I should have narrowed it down to one specific method of interaction – and had I done so, I would have chosen the virtual reality format, due to the newness of the technology and the way it physically engages its audience. This format, which works well for both video and still images, has great potential in taking the audience to a location or event not just to watch, but to experience it. For future research and practical projects, I will know to research the broader field in more depth first, before settling on a subject.

Bibliography

Appendix

  • [Image 1] AJ+ call to action ‘share this’. Original video posted to Facebook on 27th August 2017, available to view at: https://www.facebook.com/ajplusenglish/videos/1030486467092860/
  • [Image 2] A diagram showing the framing within a flat video, and demonstrating the rule of thirds.
  • [Image 3] A diagram showing the camera position (blue), fixed viewing angle (grey) and subject placement (black) for a flat video.
  • [Image 4] A diagram showing the framing within a virtual reality (360º) video with the rule of thirds.
  • [Image 5] A diagram showing the camera position (blue), surrounding viewing angle (grey) and subject placement (black) for a virtual reality (360º) video.
