
Investigating Smart TV Gesture Interaction Based on Gesture Types and Styles

Junyoung Ahn, Kyungdoh Kim
10.5143/JESK.2017.36.2.109 Epub 2017 April 28


Abstract

Objective: This study aims to find suitable types and styles for gesture interaction as remote control on smart TVs.

Background: Smart TVs are developing rapidly worldwide, and gesture interaction covers a wide range of research areas, especially those based on vision techniques. However, most studies focus on gesture recognition technology, and few previous studies have examined gesture types and styles on smart TVs. Therefore, it is necessary to check which gesture types and styles users prefer for each operation command.

Method: We conducted an experiment to extract the user manipulation commands required for smart TVs and to select the corresponding gestures. To do this, we observed which gesture styles people use for each operation command and checked whether they prefer some gesture styles over others. Based on these results, smart TV operation commands and their gestures were selected.

Results: Eighteen TV commands were used in this study. Using the agreement level as a basis, we compared six gesture types and five gesture styles for each command. As for gesture type, participants generally preferred Path-Moving gestures, and the Pan and Scroll commands showed the highest agreement level (1.00) among the 18 commands. As for gesture style, participants preferred the manipulative style for 11 commands (Next, Previous, Volume up, Volume down, Play, Stop, Zoom in, Zoom out, Pan, Rotate, and Scroll).

Conclusion: Based on an analysis of user-preferred gestures, nine gesture commands are proposed for gesture control on smart TVs. Most participants preferred Path-Moving type and Manipulative style gestures modeled on actual operations.

Application: The results can be applied to more advanced forms of gestures in 3D environments, such as VR research. The method used in this study can also be utilized in various domains.



Keywords



Smart TV, Natural UI/UX, Interaction, Gesture type, Gesture style



1. Introduction

1.1 Background

A smart TV, equipped with Internet capability, is defined as a TV with better interaction ability with users than existing TVs, and smart TVs are developing rapidly worldwide (Shin et al., 2013). Unlike existing TVs, which focus only on media broadcasting, smart TVs deliver various multimedia contents from network devices to users and also offer interactive Internet services including social networking, online games, shopping, and Web browsing (Lee et al., 2013). As such, smart TVs offer interactive multimedia experiences like smart devices, beyond the TV as an existing broadcasting device.

In a study on interactive TV by Cesar and Chorianopoulos (2009), content control was regarded as a success factor together with content editing and content sharing. A remote control is one method of controlling content; the existing remote control is a handheld input device without computing capability (Wang et al., 2011). However, as smart TVs emerged and remote controls became much more complicated than before, a new interaction type, differentiated from the existing remote control, was demanded, and many studies on NUI-based interaction applicable to smart TVs were carried out (Lee and Lee, 2013). Gesture interaction is a next-generation interface for remote control, and has the advantage of natural and direct interaction using the hand (Shin and Choe, 2011; Baudel and Beaudouin-Lafon, 1993). Numerous manufacturers, such as Samsung, are making efforts to bring direct and natural experiences to smart TVs (Kim, 2014; Lee and Lee, 2013). Therefore, natural interaction systems including gesture remote control need to be extended to smart TVs, and research in this field should be conducted steadily.

Most previous studies on gestures focused on recognition technology, with little attention to which gesture should be made for each operation command. Vatavu (2012) studied TV gesture control in which each user defines his or her own gestures. However, rather than having each user set the commands, representative gestures should be defined for the commands so that any user can use them. A gesture should be easy for a novice user to learn (ease of learning), and easy to use with a small recognition load on the user (ease of use) (Krum et al., 2002). This differs from an existing remote control, on which a user can simply press the button corresponding to each control function. For this reason, an appropriate number of commands needs to be selected so that users can easily learn, remember, and use them for gesture interaction, and each gesture command needs to be examined to determine which gesture style is suitable and which style users prefer.

1.2 Gesture styles

Karam and Schraefel (2005) classified gesture styles in HCI, based on previous gesture studies, as follows: (1) deictic gestures, (2) manipulative gestures, (3) semaphoric gestures, (4) gesticulation, and (5) language gestures. A deictic gesture is an indicative gesture pointing out a spatial location; such gestures are used in virtual reality as well as in the desktop environment. A manipulative gesture is based on an actual, existing manipulation, such as pushing the hand upward as if adjusting a control bar to turn the volume of a device up. A semaphoric gesture is a kind of semaphore, defined as using a stylized dictionary of static or dynamic gestures (Quek et al., 2002), such as a static gesture connecting the thumb and index finger to express an "okay" symbol. Gesticulation refers to the gestures made spontaneously during speech; it appears in speech-and-gesture interfaces, which are multimodal in nature, and is also called "coverbal gesture" (Bolt and Herranz, 1992; Kettebekov, 2004). Lastly, the language gesture corresponds to sign language recognition and refers to gestures that express letter information by drawing, for example drawing an "S" with the hand to express "stop" (Kim, 2014). Unlike gesticulation, the language gesture expresses a specific symbol that is stored in the recognition system and is based on language factors.

In summary, although gesture interaction is needed as a remote control method for smart TVs, most previous studies have focused on recognition technology, and research on which commands and gestures users actually require is inadequate. In this context, the purpose of this study is to find commands and gestures that are suitable for and preferred by users. An experiment was carried out with users in which the control commands required for a smart TV were extracted and the corresponding gestures were selected. To this end, we examined which gesture styles users employ and which gestures they prefer among those styles, and used the results to select smart TV control commands and gestures.

2. Gesture Types and Command Selection

The experiment and analysis were conducted by referring to the study of Choi et al. (2012) and the gesture study method of Park and Han (2013). The process was to find, for each control command, the most suitable gesture by checking the gesture styles within the suitable gesture factors. Before the experiment, gesture types were defined so that preferred gestures could be grouped by type in the analysis, and the control commands used for remote control of a smart TV were collected.

2.1 Gesture types

Detailed gesture types for the smart TV were constituted by referring to the gesture study of Park and Han (2013). The gesture types used in existing studies are touch screen-based types in the 2D environment, comprising the x-axis and y-axis. This study revised them into gestures applicable to 3D space by adding the z-axis. Table 1 shows the revised gesture types. The gesture types consist of "tap", "pose", and "path", which are divided into simple/pattern, static/dynamic, and staying/moving, respectively. Existing studies grouped gestures according to type; for example, tapping once with one finger and tapping once with two fingers were gathered into the same group, and this grouping was reflected in the agreement level. This study also carried out grouping based on gesture types, as in existing studies.

Types | Definitions
Tap - Simple | Taps on the Z-axis direction only once (e.g., single tap)
Tap - Pattern | Taps on the Z-axis direction using rhythm (e.g., double tap, triple tap)
Pose - Static | Hand posture does not change during the input
Pose - Dynamic | Hand posture changes during the input (e.g., rotate, pinch)
Path - Staying | Hand stays in one location
Path - Moving | Hand moves while drawing specific trajectories (e.g., drag, flick)

Table 1. Gesture types for smart TV
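For the grouping analysis, the six types in Table 1 can be represented as a simple key. The following Python sketch is illustrative only; the class and field names are assumptions, not the authors' analysis code. It shows one way to tag each elicited gesture with its type group so that gestures differing only in the number of fingers fall into the same group, as described above.

```python
from dataclasses import dataclass
from enum import Enum

class GestureType(Enum):
    """Six gesture types for smart TV (Table 1)."""
    TAP_SIMPLE = "Tap-Simple"      # single tap along the z-axis
    TAP_PATTERN = "Tap-Pattern"    # rhythmic taps, e.g., double/triple tap
    POSE_STATIC = "Pose-Static"    # hand posture unchanged during input
    POSE_DYNAMIC = "Pose-Dynamic"  # hand posture changes, e.g., rotate, pinch
    PATH_STAYING = "Path-Staying"  # hand stays in one location
    PATH_MOVING = "Path-Moving"    # hand draws a trajectory, e.g., drag, flick

@dataclass
class ObservedGesture:
    """One gesture elicited from a participant for a command (hypothetical record)."""
    command: str
    gesture_type: GestureType
    description: str  # free-form note, e.g., "swipe left with one finger"

# Gestures that differ only in the number of fingers used share the same
# GestureType, so the grouping key for the agreement analysis is just the type.
def type_group(g: ObservedGesture) -> GestureType:
    return g.gesture_type
```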

2.2 Command selection for gestures

Commands for which gesture interaction on a smart TV is possible were considered. The control commands targeted an LG smart TV (model: 47LM9600) and were selected by referring to the commands used in existing smart TV-related studies (Lee et al., 2011; Park and Han, 2013; Vatavu, 2012; Vatavu and Zaiti, 2014). As a result, 39 commands were collected across five applications (TV, photo, video, music, and Web browser) (Table 2).

Among the collected commands, those duplicated across applications were grouped. For example, Channel in the home (basic) application changes the TV channel, but it can be grouped with commands of a similar nature in the other applications: passing to another photo in the photo application, playing the previous or following content in the video and music applications, and opening the previous or following page in the Web browser. These were grouped into the Next and Previous commands. Revising the commands in this manner yielded the 18 commands used in the experiment (Table 3).

Applications | Commands
Home | Home, Channel, Volume, Mute, Source
Photo | Home, Next/Previous, Play/Stop, Volume, Mute, Source, Rotate, Zoom, Pan
Video | Home, Next/Previous, Play/Stop, Volume, Mute, Source
Music | Home, Next/Previous, Play/Stop, Volume, Mute, Source
Web browser | Home, Next/Previous, Play/Stop, Volume, Mute, Source, Zoom, Pan, Scroll, Refresh, Bookmark, History, Toggle

Table 2. All smart TV commands with the five applications

Commands | Definitions
Home | Go to the home screen
Next | Go to the next channel / content / page
Previous | Go to the previous channel / content / page
Volume up | Turn the volume up
Volume down | Turn the volume down
Mute | Mute the volume
Source | Select the external input source
Play | Play the content
Stop | Stop playing the content
Zoom in | Zoom in on the image
Zoom out | Zoom out of the image
Pan | Pan the image displayed at zoom
Rotate | Rotate the image
Scroll | Scroll the page
Refresh | Refresh the current Web page
Bookmark | Open the bookmark list
History | Open the history list
Toggle | Toggle between windows

Table 3. The 18 gesture commands for smart TV

3. Method

3.1 Apparatus

A 47-inch (diagonal) LG smart TV (model: 47LM9600; resolution: 1920×1080) was used in this study, and the experiment and analysis were carried out in the Web OS environment, the operating system of the LG smart TV, in order to examine the commands required for a smart TV. A Nikon D5300 camera was used to videotape the participants performing the tasks.

3.2 Metric

For each command, Park and Han (2013) measured the consistency of the gestures made by users through the agreement level (Wobbrock et al., 2005). Their study grouped the gesture factors (e.g., posture, location, touch, pose, path, and device) of the gestures users made per command and applied them. The present study regarded gestures that differed only in the number of fingers used as belonging to the same group. The agreement level is given in Equation 1:

$$A_i = \sum_{j} \left( \frac{n_{ij}}{N_i} \right)^{2} \qquad (1)$$

Here, A_i is the agreement level for the ith command, N_i is the number of gestures made by the users for the ith command, and n_ij is the number of gestures within the jth group among those gestures. Park and Han (2013) concluded that commands are suitable for mobile Web browser gestures in order of decreasing agreement level. This measure is judged useful for selecting commands for which gestures come easily, since the consistency of the gestures users prefer for each command can be checked. In previous studies, the group with the highest n_ij value was regarded as the top gesture; in other words, the gesture that users most consistently think of to express a command is considered the most suitable. In this way, the commands for the smart TV and the gestures most suitable for them are found using the agreement level. Criteria for gesture grouping are needed to compute agreement levels. In existing studies, grouping was conducted on the basis of gesture types suitable for smartphones, and likewise the commands required for current smart TVs are 2D-based. Previous studies reported that using a 2D metaphor for control within a 3D environment brings about negative effects because control differs between 2D and 3D (Liang and Green, 1994), and that better performance is achieved when the dimensionality of the task matches that of the input (Zhai and Milgram, 1998). Therefore, the 2D-based (smartphone) gesture types of existing studies must be revised into 3D-based gestures suitable for the smart TV by reflecting the z-axis.
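As a minimal illustration of Equation 1, the following Python sketch computes the agreement level for one command from the group labels assigned to each elicited gesture; the function name and data layout are illustrative assumptions rather than the authors' analysis code.

```python
from collections import Counter
from typing import Hashable, Iterable

def agreement_level(group_labels: Iterable[Hashable]) -> float:
    """Compute A_i = sum_j (n_ij / N_i)^2 for one command.

    group_labels: the group (e.g., gesture type) assigned to each gesture
    elicited for command i; its length is N_i.
    """
    labels = list(group_labels)
    n_total = len(labels)          # N_i
    if n_total == 0:
        return 0.0
    counts = Counter(labels)       # n_ij for each group j
    return sum((n / n_total) ** 2 for n in counts.values())

# Example from Table 4: for the Pan command all 17 gestures fell into the
# Path-Moving group, so the agreement level is (17/17)^2 = 1.0.
pan_labels = ["Path-Moving"] * 17
print(agreement_level(pan_labels))  # 1.0
```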

3.3 Participants

This study targeted people in their 20s and 30s, who most actively handle gesture-capable smart devices such as smartphones, tablet PCs, and smart TVs. The participants were 20 undergraduate and postgraduate students at Hongik University, consisting of 14 males and 6 females (gender ratio was not considered), with a mean age of 26.5.

3.4 Experimental design

The experiment was carried out in a separate space within a laboratory and used a within-subject design in which all participants performed the 18 command tasks. The commands were presented in the order shown in Table 3, and the same order was used for all participants.

The questionnaire administered prior to the experiment used a 7-point Likert scale (1: not at all, 7: very much so). Questions on personal information were asked first, and the participants then answered questions on smart TV use experience, whether they use gestures on smart devices, and their favorite gesture.

3.5 Procedure

Before the experiment, the participants were asked questions on personal information and gestures. Explanations of the smart TV functions and applications were presented, along with the 18 gesture commands, 6 gesture types, and 5 gesture styles, and the participants were given enough time to operate the five smart TV applications. The lists of gesture types and styles explained above were presented before the tasks so that the participants could select their preferred type and style during task performance. The participants could use one hand or both hands when making gestures, and no time limit was imposed. As in existing studies, they were instructed to pass when they could not think of a gesture for the command concerned. The experiment was carried out using a think-aloud protocol: after making a gesture for each command, the participants were asked the reasons for that gesture and their opinions on the command. All task-performing processes were videotaped for analysis.

4. Results

4.1 Questionnaire

The preliminary questionnaire administered before the experiment was analyzed. According to the personal information survey, 10 of the 20 participants did not own a smart TV or had no experience using one; that is, half of the respondents had no experience with interactive functions beyond broadcast viewing on TV. All participants answered 4 or higher on whether they use 2D gestures on smart devices (mean = 5.9, S.D. = 1.04). Most participants answered that they use a gesture to move to the previous or next page in smartphone applications, including the Web browser, and that they use a rotation gesture when editing a photo.

4.2 Gesture types

The gestures for each command were grouped by gesture type to compute agreement levels (Table 4). Figure 1 shows the calculated agreement level for each command.

Commands | Tap-Simple | Tap-Pattern | Pose-Static | Pose-Dynamic | Path-Staying | Path-Moving | N
Home | 2 | - | 2 | - | - | 13 | 17
Next | 1 | - | 3 | - | - | 16 | 20
Previous | 1 | - | 3 | - | - | 16 | 20
Volume up | - | - | 2 | - | - | 18 | 20
Volume down | - | - | 2 | - | - | 18 | 20
Mute | 1 | - | 13 | - | - | 5 | 19
Source | - | - | 2 | - | - | 10 | 12
Play | 13 | - | 4 | - | - | 3 | 20
Stop | 11 | - | 6 | - | - | 3 | 20
Zoom in | - | - | - | 10 | - | 10 | 20
Zoom out | - | - | - | 10 | - | 10 | 20
Pan | - | - | - | - | - | 17 | 17
Rotate | - | - | 1 | - | - | 19 | 20
Scroll | - | - | - | - | - | 20 | 20
Refresh | - | 1 | - | 1 | - | 13 | 15
Bookmark | - | - | 2 | - | - | 14 | 16
History | - | - | 1 | 1 | - | 9 | 11
Toggle | - | 1 | 1 | 1 | - | 11 | 14

Table 4. The groups based on gesture types
Figure 1. Agreement levels of the gesture types for the 18 commands, sorted in descending order

Across the 18 commands, the participants preferred path-moving type gestures. The agreement level of the pan and scroll commands was 1.00, the highest among the 18 commands, because all participants made path-moving type gestures and thus fell into a single group. This means that users prefer hand-moving gestures for these two commands. Meanwhile, the stop command showed the lowest agreement level at 0.42. The gestures made for the stop command fell into three groups of 11, 6, and 3 gestures; the agreement level was lower because the differences between the groups were relatively small compared to other commands, meaning that the gesture types participants thought of or preferred were not consistent. For the volume up and volume down commands, the agreement levels were identical and the resulting groups were the same, and the same result appeared for next/previous and zoom in/zoom out. These are command pairs of opposite nature, and the participants appear to have made gestures of the same type but with opposite directions.
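For instance, substituting the Stop groups from Table 4 (11, 6, and 3 gestures out of 20) into Equation 1 reproduces the reported value:

$$A_{\text{Stop}} = \left(\frac{11}{20}\right)^{2} + \left(\frac{6}{20}\right)^{2} + \left(\frac{3}{20}\right)^{2} = 0.3025 + 0.09 + 0.0225 \approx 0.42$$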

4.3 Gesture styles

As with gesture types, agreement levels were checked to identify users' consistency in gesture styles (Table 5 and Figure 2).

Commands | Deictic | Manipulative | Semaphoric | Gesticulation | Language | N
Home | - | 2 | 10 | 1 | 4 | 17
Next | 3 | 17 | - | - | - | 20
Previous | 3 | 17 | - | - | - | 20
Volume up | 2 | 18 | - | - | - | 20
Volume down | 2 | 18 | - | - | - | 20
Mute | - | - | 14 | 2 | 3 | 19
Source | 1 | 2 | 3 | 1 | 5 | 12
Play | 2 | 14 | 3 | - | 1 | 20
Stop | - | 12 | 7 | - | 1 | 20
Zoom in | - | 19 | 1 | - | - | 20
Zoom out | - | 19 | 1 | - | - | 20
Pan | - | 17 | - | - | - | 17
Rotate | 1 | 19 | - | - | - | 20
Scroll | - | 20 | - | - | - | 20
Refresh | 2 | 1 | 9 | - | 3 | 15
Bookmark | 4 | - | 10 | 1 | 1 | 16
History | 2 | 1 | 2 | 1 | 5 | 11
Toggle | 4 | 5 | 4 | - | 1 | 14

Table 5. Gesture styles with 18 commands for smart TV
Figure 2. Agreement levels of gesture styles for the 18 commands, sorted in descending order

Regarding gesture styles, the participants preferred manipulative gestures for the following commands: next, previous, volume up, volume down, play, stop, zoom in, zoom out, pan, rotate, and scroll. The participants who made such gestures said they were based on their experience using these gestures on a smartphone. They also replied that, apart from smartphone experience, they thought of the gestures directly from the commands themselves. For the volume commands, the smart TV used in the experiment displays a clockwise/counterclockwise control effect; however, only three participants made the corresponding gestures, whereas 11 participants thought of gestures manipulating up and down. Likewise, control in the left and right directions appears to be part of their mental model for moving to the previous or next content, based on experience with other devices. In conclusion, users seem to prefer manipulative gestures for commands associated with directivity, based on their prior experience.

For the mute command, a consistent gesture style appeared with semaphoric gestures: the participants used a "quiet" gesture in line with the meaning of mute, the mute icon, or an "x" sign meaning prohibition. For the remaining seven commands, the gesture styles were not consistent, and the participants preferred different styles. For the home command, the language gesture of drawing the "H" of Home was preferred most (4 participants). For the source and history commands, the language gestures drawing "S" (5 participants) and "H" (5 participants) were relatively preferred. The participants were not familiar with representing these commands with gestures and could not directly think of one, so they said they preferred the language gesture style. However, the semaphoric style was preferred most for the refresh and bookmark commands. For refresh, six participants thought of the icon shaped like an object rotating clockwise, based on their experience with smart TV and PC Web browsers, and made it into a gesture. Likewise, seven participants answered that they thought of the Web browser's star-shaped icon and made it into a gesture. In conclusion, when users have no experience controlling a command with gestures, they prefer to express the command's language as a gesture; when they think of a familiar icon or symbol, they prefer semaphoric gestures. Lastly, for the toggle command, the participants preferred the manipulative and deictic styles: five preferred the manipulative style of swiping the palm to the left and right, and four preferred the deictic style of selecting by pointing. However, the difference between these two styles was minimal and it was unreasonable to define a single gesture style, so this study excluded toggle from the smart TV commands.

5. Discussion

The experiment results imply the following. The gestures preferred by the users were based on their experience with other smart devices: the participants preferred using the smartphone gestures familiar to them on the smart TV as well, and these gestures were generally of the path-moving type and manipulative style. When the participants thought of icons or symbols representing a command, they preferred semaphoric gestures. For commands that are awkward to operate with gestures, they tended to fall back on language gestures; for such commands, users have little need to make gestures, and existing button control may be more appropriate.

This study selected the gesture for each command by ranking the gesture types and styles identified above. The ranking used a numeric value reflecting the consistency of both type and style, obtained by multiplying the two agreement levels (a computational sketch of this ranking is given after Table 6). Nine final commands were adopted, based on the magical number presented by Miller (1956), as the final number of commands. Table 6 shows the gesture types and styles suitable for gesture interaction on the smart TV, and Figure 3 shows examples of gestures in line with the preferred types and styles.

Commands | Types | Styles | Gesture examples
Pan | Path-Moving | Manipulative | Moving while grabbing with the hand
Scroll | Path-Moving | Manipulative | Passing with the sliding palm
Rotate | Path-Moving | Manipulative | Rotating with the palm
Volume up | Path-Moving | Manipulative | Pushing upward with a finger
Volume down | Path-Moving | Manipulative | Pushing downward with a finger
Next | Path-Moving | Manipulative | Passing to the left with a finger
Previous | Path-Moving | Manipulative | Passing to the right with a finger
Zoom in | Path-Moving / Pose-Dynamic | Manipulative | Spreading with both hands or fingers
Zoom out | Path-Moving / Pose-Dynamic | Manipulative | Pinching with both hands or fingers

Table 6. The identified set of nine gestures with their types and styles for smart TV
Figure 3. Nine gesture examples and commands
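The following Python sketch illustrates the selection described above, under the assumption that commands are simply ranked by the product of their type and style agreement levels and the top nine are kept; the group counts are taken from Tables 4 and 5 for three commands only, and the function and variable names are illustrative, not the authors' code.

```python
from typing import Dict, List

def agreement(counts: List[int]) -> float:
    """A = sum_j (n_j / N)^2 computed from the per-group gesture counts."""
    n = sum(counts)
    return sum((c / n) ** 2 for c in counts) if n else 0.0

# Per-command group counts taken from Table 4 (types) and Table 5 (styles);
# only three of the 18 commands are shown here for brevity.
type_groups: Dict[str, List[int]] = {
    "Pan": [17], "Scroll": [20], "Stop": [11, 6, 3],
}
style_groups: Dict[str, List[int]] = {
    "Pan": [17], "Scroll": [20], "Stop": [12, 7, 1],
}

# Rank commands by the product of the type and style agreement levels and
# keep the top nine (Miller's magical number) as the final gesture set.
scores = {
    cmd: agreement(type_groups[cmd]) * agreement(style_groups[cmd])
    for cmd in type_groups
}
top_commands = sorted(scores, key=scores.get, reverse=True)[:9]
print(scores)        # Pan and Scroll score 1.0; Stop scores much lower
print(top_commands)  # ranking used to pick the final command set
```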
6. Conclusion

This study aimed to derive commands for gesture-based remote control of a smart TV and to define the corresponding gestures. For gesture definition, we used gesture styles in addition to the gesture types used in existing studies. With this method, the gestures desirable for smart TV commands can be identified in a way that reflects the gestures users prefer, going beyond the agreement-level method of previous studies. The commands and gestures required for smart TV control were identified, along with users' tendencies by gesture type and style. For gesture types, most users preferred gestures that move the hand, and for commands with directivity they preferred manipulative gesture styles based on actual operating experience. For unfamiliar commands that they had no experience using, the participants could not easily think of gestures and preferred the language gesture style; however, when they thought of familiar symbols such as icons for a command, even without usage experience, they preferred to define the signal with the semaphoric gesture style. Based on these results, this study selected nine commands for the smart TV and presented the preferred gesture types and styles. Overall, the participants showed the most consistency with the path-moving type, moving the hand according to actual controls that they had experienced or knew, and with the manipulative gesture style.

This study has some limitations. The sample size was 20 people, and a larger sample will be needed in further studies to better generalize the proposed results. Clearer results could be obtained if measures such as gesture-making time and error rate were collected in a controlled experiment in which the gestures are actually applied. Although users' thoughts on the commands were confirmed through interviews during the experiment, better results in defining gesture styles would be expected if a procedure for ascertaining users' mental models were applied.

Many studies on gestures as natural UI/UX will be carried out, and many gestures can be applied to devices and services, so research on gestures needs to be conducted actively. This study focused on the gestures used for a smart TV, but its method and results can also be applied, in a more developed form, to gesture research in 3D environments such as VR. In addition, the method used in this study can be utilized in various domains.



References


1. Baudel, T. and Beaudouin-Lafon, M., Charade: remote control of objects using free-hand gestures, Communications of the ACM, 36(7), 28-35, 1993.

2. Bolt, R.A. and Herranz, E., Two-handed gesture in multi-modal natural dialog, In Proceedings of the 5th annual ACM symposium on User interface software and technology, 7-14, ACM, 1992.

3. Cesar, P. and Chorianopoulos, K., The evolution of TV systems, content, and users toward interactivity, Foundations and Trends in Human-Computer Interaction, 2(4), 279-373, 2009.

4. Choi, E., Kwon, S., Lee, D., Lee, H. and Chung, M.K., Can user-derived gesture be considered as the best gesture for a command?: Focusing on the commands for smart home system, In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 56(1), 1253-1257, 2012.

5. Karam, M. and Schraefel, M.C., A taxonomy of gestures in human computer interactions, 2005.

6. Kettebekov, S., Exploiting prosodic structuring of coverbal gesticulation, In Proceedings of the 6th international conference on Multimodal interfaces, ACM, 105-112, 2004.

7. Kim, H.J., A study on plan for top-down gestures design through gesture interaction case analysis, Journal of Digital Design, 14(2), 439-449, 2014.

8. Krum, D.M., Omoteso, O., Ribarsky, W., Starner, T. and Hodges, L.F., Speech and gesture multimodal control of a whole Earth 3D visualization environment, 2002.

9. Lee, H.R. and Lee, W.H., A Study on user experience design for efficient control of Smart TV, Journal of the Korea Society of Computer and Information, 18(1), 43-53, 2013.

10. Lee, S.H., Sohn, M.K., Kim, D.J., Kim, B. and Kim, H., Smart TV interaction system using face and hand gesture recognition, In Consumer Electronics (ICCE), 2013 IEEE International Conference, 173-174, 2013.

11. Lee, S.S., Maeng, S., Kim, D., Lee, K.P., Lee, W., Kim, S. and Jung, S., FlexRemote: Exploring the effectiveness of deformable user interface as an input device for TV, In International Conference on Human-Computer Interaction, Springer Berlin Heidelberg, 62-65, 2011.

12. Liang, J. and Green, M., JDCAD: A highly interactive 3D modeling system, Computers & Graphics, 18(4), 499-506, 1994.

13. Miller, G.A., The magical number seven, plus or minus two: some limits on our capacity for processing information, Psychological Review, 63(2), 81, 1956.

14. Park, W. and Han, S.H., Intuitive multi-touch gestures for mobile web browsers, Interacting with Computers, 25(5), 335-350, 2013.

15. Quek, F., McNeill, D., Bryll, R., Duncan, S., Ma, X.-F., Kirbas, C., McCullough, K.E. and Ansari, R., Multimodal human discourse: gesture and speech, ACM Transactions on Computer-Human Interaction (TOCHI), 9(3), 171-193, 2002.

16. Shin, D.H., Hwang, Y. and Choo, H., Smart TV: are they really smart in interacting with people? Understanding the interactivity of Korean Smart TV, Behaviour & Information Technology, 32(2), 156-172, 2013.

17. Shin, Y.K. and Choe, J.H., Remote Control Interaction for Individual Environment of Smart TV, Journal of the Korea Contents Association, 11(11), 70-78, 2011.

18. Vatavu, R.D., User-defined gestures for free-hand TV control, In Proceedings of the 10th European conference on Interactive tv and video, ACM, 45-48, 2012.

19. Vatavu, R.D. and Zaiti, I.A., Leap gestures for TV: insights from an elicitation study, In Proceedings of the 2014 ACM international conference on Interactive experiences for TV and online video, ACM, 131-138, 2014.

20. Wang, S.C., Chung, T.C. and Yan, K.Q., A new territory of multi-user variable remote control for interactive TV, Multimedia Tools and Applications, 51(3), 1013-1034, 2011.

21. Wobbrock, J.O., Aung, H.H., Rothrock, B. and Myers, B.A., Maximizing the guessability of symbolic input, In CHI'05 extended abstracts on Human Factors in Computing Systems, ACM, 1869-1872, 2005.

22. Zhai, S. and Milgram, P., Quantifying coordination in multiple DOF movement and its application to evaluating 6 DOF input devices, In Proceedings of the SIGCHI conference on Human factors in computing systems, ACM, 320-327, 1998.
