Publications


VIA Group - LTCI - Télécom ParisTech





2018
[31] MobiLimb: Augmenting Mobile Devices with a Robotic Limb.
M. Teyssier, G. Bailly, C. Pelachaud, E. Lecolinet.
In UIST'18: Proceedings of the ACM Symposium on User Interface Software and Technology, ACM (2018). To appear.
bibcite
@inproceedings{MT:UIST-18,
 author = {M. {Teyssier} and G. {Bailly} and C. {Pelachaud} and E. {Lecolinet}},
 booktitle = {UIST'18: Proceedings of the ACM Symposium on User Interface Software and Technology},
 month = oct,
 note = {To appear},
 publisher = {ACM},
 title = {MobiLimb: Augmenting Mobile Devices with a Robotic Limb},
 year = {2018},
}
keywords
Mobile device, Actuated devices, Robotics, Mobile Augmentation

 
[30] Self-Reflection and Personal Physicalization Construction.
A. Thudt, U. Hinrichs, S. Huron, S. Carpendale.
In CHI'18: Conference on Human Factors in Computing Systems, ACM (2018).
doi bibcite
@inproceedings{HURON:CHI-18,
 author = {A. {Thudt} and U. {Hinrichs} and S. {Huron} and S. {Carpendale}},
 booktitle = {CHI'18: Conference on Human Factors in Computing Systems},
 month = apr,
 publisher = {ACM},
 title = {Self-Reflection and Personal Physicalization Construction},
 year = {2018},
}
keywords
Self-Reflection; Constructive Visualization; Personal Data
abstract
Self-reflection is a central goal of personal informatics systems, and constructing visualizations from physical tokens has been found to help people reflect on data. However, so far, constructive physicalization has only been studied in lab environments with provided datasets. Our qualitative study investigates the construction of personal physicalizations in people's domestic environments over 2-4 weeks. It contributes an understanding of (1) the process of creating personal physicalizations, (2) the types of personal insights facilitated, (3) the integration of self-reflection in the physicalization process, and (4) its benefits and challenges for self-reflection. We found that in constructive personal physicalization, data collection, construction, and self-reflection are deeply intertwined. This extends previous models of visualization creation and data-driven self-reflection. We outline how benefits such as reflection through manual construction, personalization, and presence in everyday life can be transferred to a wider set of digital and physical systems.

 
[29] Impact of Semantic Aids on Command Memorization for On-Body Interaction and Directional Gestures.
B. Fruchard, E. Lecolinet, O. Chapuis.
In International Conference on Advanced Visual Interfaces, AVI 2018, (2018). p. 9.
pdf bibcite
@inproceedings{BL:AVI-18,
 address = {Grosseto, Italy},
 author = {B. {Fruchard} and E. {Lecolinet} and O. {Chapuis}},
 booktitle = {International Conference on Advanced Visual Interfaces, AVI 2018},
 month = jun,
 pages = {9},
 title = {Impact of Semantic Aids on Command Memorization for On-Body Interaction and Directional Gestures},
 year = {2018},
}
keywords
Semantic aids; Memorization; Command selection; On-body interaction; Marking menus; Virtual reality
abstract
Previous studies have shown that spatial memory and semantic aids can help users learn and remember gestural commands. Using the body as a support to combine both dimensions has therefore been proposed, but no formal evaluations have yet been reported. In this paper, we compare, with or without semantic aids, a new on-body interaction technique (BodyLoci) to mid-air Marking menus in a virtual reality context. We consider three levels of semantic aids: no aid, story-making, and story-making with background images. Our results show important improvement when story-making is used, especially for Marking menus (28.5% better retention). Both techniques performed similarly without semantic aids, but Marking menus outperformed BodyLoci when using them (17.3% better retention). While our study does not show a benefit in using body support, it suggests that inducing users to leverage simple learning techniques, such as story-making, can substantially improve recall, and thus make it easier to master gestural techniques. We also analyze the strategies used by the participants for creating mnemonics to provide guidelines for future work.

 
[28] WebLinux: a scalable in-browser and client-side Linux and IDE.
R. Sharrock, L. Angrave, E. Hamonic.
In ACM Learning at scale, (2018).
bibcite
@inproceedings{SR:LAS-18,
 address = {London, Great Britain},
 author = {R. {Sharrock} and L. {Angrave} and E. {Hamonic}},
 booktitle = {ACM Learning at scale},
 month = jun,
 title = {WebLinux: a scalable in-browser and client-side Linux and IDE},
 year = {2018},
}

 
[27] Teaching Linux and C programming in MOOCs with a scalable in-browser and 100% client-side Linux IDE.
R. Sharrock, E. Hamonic, P. Taylor, M. Goudzwaard.
In OpenedX, (2018).
bibcite
@inproceedings{SR:OEDX-18,
 address = {Montreal, Canada},
 author = {R. {Sharrock} and E. {Hamonic} and P. {Taylor} and M. {Goudzwaard}},
 booktitle = {OpenedX},
 month = may,
 title = {Teaching Linux and C programming in MOOCs with a scalable in-browser and 100\% client-side Linux IDE},
 year = {2018},
}

 
[26] Codecast: an interactive tutorial tool to teach and learn the C programming language effectively in a MOOC.
R. Sharrock, P. Taylor, E. Hamonic, M. Goudzwaard.
In OpenedX, (2018).
bibcite
@inproceedings{SR:OEDX2-18,
 address = {Montreal, Canada},
 author = {R. {Sharrock} and P. {Taylor} and E. {Hamonic} and M. {Goudzwaard}},
 booktitle = {OpenedX},
 month = may,
 title = {Codecast: an interactive tutorial tool to teach and learn the C programming language effectively in a MOOC},
 year = {2018},
}

 
[25] Making Sense of Data Workers' Sense Making Practices.
J. Liu, N. Boukhelifa, J. Eagan.
In Extended Abstracts of the 2018 CHI Conference, (2018).
pdf bibcite
@inproceedings{liu:hal-01826714,
 address = {Montr{\'e}al, Qu{\'e}bec, Canada},
 author = {J. {Liu} and N. {Boukhelifa} and J. {Eagan}},
 booktitle = {Extended Abstracts of the 2018 CHI Conference},
 month = apr,
 title = {Making Sense of Data Workers' Sense Making Practices},
 year = {2018},
}
keywords
Data science; sensemaking; visualisation; visual analytics
abstract
Data workers are non-professional data scientists who engage in data analysis activities as part of their daily work. In this position paper, we draw on our past experience in studying their data analysis processes and workflows, and the tools we built to support sensemaking. We describe our background as computer scientists and our multidisciplinary approach. Finally, we conclude with open questions and research directions, and argue for more research into the challenges faced by data workers.

 
[24] Large scale learning tools to teach C and Linux.
R. Sharrock.
In Berkeley education technologies seminar, (2018).
bibcite
@inproceedings{SR:BK-18,
 author = {R. {Sharrock}},
 booktitle = {Berkeley education technologies seminar},
 month = mar,
 title = {Large scale learning tools to teach C and Linux},
 year = {2018},
}

 
[23] Coding Tutorials for any Programming Language or Interactive Tutorials for C and Arduino.
R. Sharrock, B. Gaultier, P. Taylor, M. Goudzwaard, M. Hiron, E. Hamonic.
In ACM SIGCSE, (2018).
bibcite
@inproceedings{SR:SIGCSE-18,
 address = {Baltimore, USA},
 author = {R. {Sharrock} and B. {Gaultier} and P. {Taylor} and M. {Goudzwaard} and M. {Hiron} and E. {Hamonic}},
 booktitle = {ACM SIGCSE},
 month = feb,
 title = {Coding Tutorials for any Programming Language or Interactive Tutorials for C and Arduino},
 year = {2018},
}

 
[22] WebLinux: run Linux 100% client-side in the browser.
R. Sharrock.
In HarvardX seminar, (2018).
bibcite
@inproceedings{SR:HA-18,
 address = {Harvard University, USA},
 author = {R. {Sharrock}},
 booktitle = {HarvardX seminar},
 month = feb,
 title = {WebLinux: run Linux 100\% client-side in the browser},
 year = {2018},
}

 
[21] How to build leverage, maintain and assess motivation in online learning environments.
R. Sharrock.
In Stanford Digital Learning Initiative, (2018).
bibcite
@inproceedings{SR:SD-18,
 address = {Stanford, California, USA},
 author = {R. {Sharrock}},
 booktitle = {Stanford Digital Learning Initiative},
 month = feb,
 title = {How to build leverage, maintain and assess motivation in online learning environments},
 year = {2018},
}

 
2017
[20] Visual Menu Techniques.
G. Bailly, E. Lecolinet, L. Nigay.
ACM Computing Surveys, 49, 4, (2017). 41 pages.
doi hal pdf bibcite
@article{BLN:ACM-CSUR-17,
 author = {G. {Bailly} and E. {Lecolinet} and L. {Nigay}},
 journal = {ACM Computing Surveys},
 month = jan,
 number = {4},
 pages = {41 pages},
 title = {Visual Menu Techniques},
 volume = {49},
 year = {2017},
 hal = {hal-01258368},
 image = {BLN-ACM-CSUR-17.png},
}
keywords
Menu techniques, command selection, shortcuts
abstract
Menus are used for exploring and selecting commands in interactive applications. They are widespread in current systems and used by a large variety of users. As a consequence, they have motivated many studies in Human-Computer Interaction (HCI). Facing the large variety of menus, it is difficult to have a clear understanding of the design possibilities and to ascertain their similarities and differences. In this article, we address a main challenge of menu design: the need to characterize the design space of menus. To do this, we propose a taxonomy of menu properties that structures existing work on visual menus. As properties have an impact on the performance of the menu, we start by refining performance through a list of quality criteria and by reviewing existing analytical and empirical methods for quality evaluation. This taxonomy of menu properties is a step toward the elaboration of advanced predictive models of menu performance and the optimization of menus. A key point of this work is to focus both on menus and on the properties of menus, and then enable a fine-grained analysis in terms of performance.

[19] Codestrates: Literate Computing with Webstrates.
R. Rädle, M. Nouwens, K. Antonsen, J. Eagan, C. Klokmose.
In The 30th Annual ACM Symposium on User Interface Software and Technology, (2017).
doi bibcite
@inproceedings{eagan:hal-01692918,
 address = {Qu{\'e}bec City, Canada},
 author = {R. {R{\"a}dle} and M. {Nouwens} and K. {Antonsen} and J. {Eagan} and C. {Klokmose}},
 booktitle = {The 30th Annual ACM Symposium on User Interface Software and Technology},
 month = oct,
 title = {Codestrates: Literate Computing with Webstrates},
 year = {2017},
}

 
[18] MarkPad: Augmenting Touchpads for Command Selection.
B. Fruchard, E. Lecolinet, O. Chapuis.
In CHI'17: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, ACM (2017). 5630-5642.
doi hal pdf video bibcite
@inproceedings{MP:CHI-17,
 address = {Denver, Colorado, United States},
 author = {B. {Fruchard} and E. {Lecolinet} and O. {Chapuis}},
 booktitle = {CHI'17: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems},
 month = may,
 pages = {5630--5642},
 publisher = {ACM},
 title = {MarkPad: Augmenting Touchpads for Command Selection},
 year = {2017},
 hal = {hal-01437093/en},
 image = {MP-CHI-17.png},
 video = {https://www.youtube.com/watch?v=rUGGTrYPuSM},
 software = {http://brunofruchard.com/markpad.html},
}
keywords
Gestural interaction; bezel gestures; tactile feedback; spatial memory; touchpad; user-defined gestures; Marking menus
abstract
We present MarkPad, a novel interaction technique taking advantage of the touchpad. MarkPad allows creating a large number of size-dependent gestural shortcuts that can be spatially organized as desired by the user. It relies on the idea of using visual or tactile marks on the touchpad or a combination of them. Gestures start from a mark on the border and end on another mark anywhere. MarkPad does not conflict with standard interactions and provides a novice mode that acts as a rehearsal of the expert mode. A first study showed that an accuracy of 95% could be achieved for a dense configuration of tactile and/or visual marks allowing many gestures. Performance was 5% lower in a second study where the marks were only on the borders. A last study showed that borders are rarely used, even when the users are unaware of the technique. Finally, we present a working prototype and briefly report on how it was used by two users for a few months.
[17] CoReach: Cooperative Gestures for Data Manipulation on Wall-sized Displays.
C. Liu, O. Chapuis, M. Beaudouin-Lafon, E. Lecolinet.
In CHI'17: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, ACM (2017). 6730-6741.
doi hal pdf video bibcite
@inproceedings{LCBL:CHI-17,
 author = {C. {Liu} and O. {Chapuis} and M. {Beaudouin-Lafon} and E. {Lecolinet}},
 booktitle = {CHI'17: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems},
 month = may,
 pages = {6730--6741},
 publisher = {ACM},
 title = {CoReach: Cooperative Gestures for Data Manipulation on Wall-sized Displays},
 year = {2017},
 hal = {hal-01437091/en},
 image = {LCBL-CHI-17.jpg},
 video = {https://www.lri.fr/~chapuis/publications/CHI17-coreach.mp4},
}
keywords
Shared interaction, wall display, co-located collaboration
abstract
Multi-touch wall-sized displays afford collaborative exploration of large datasets and re-organization of digital content. However, standard touch interactions, such as dragging to move content, do not scale well to large surfaces and were not designed to support collaboration, such as passing an object around. This paper introduces CoReach, a set of collaborative gestures that combine input from multiple users in order to manipulate content, facilitate data exchange and support communication. We conducted an observational study to inform the design of CoReach, and a controlled study showing that it reduced physical fatigue and facilitated collaboration when compared with traditional multi-touch gestures. A final study assessed the value of also allowing input through a handheld tablet to manipulate content from a distance.

[16] VersaPen: An Adaptable, Modular and Multimodal I/O Pen.
M. Teyssier, G. Bailly, E. Lecolinet.
In CHI'17 Extended Abstracts: ACM SIGCHI Conference on Human Factors in Computing Systems, ACM (2017). 2155-2163.
doi hal pdf video bibcite
@inproceedings{Teyssier:VersaPen-2017,
 address = {Denver, USA},
 author = {M. {Teyssier} and G. {Bailly} and E. {Lecolinet}},
 booktitle = {CHI'17 Extended Abstracts: ACM SIGCHI Conference on Human Factors in Computing Systems},
 month = may,
 pages = {2155--2163},
 publisher = {ACM},
 title = {VersaPen: An Adaptable, Modular and Multimodal I/O Pen},
 year = {2017},
 hal = {hal-01521565},
 image = {VersaPen-wp-CHI17.png},
 video = {https://www.youtube.com/watch?v=WhhZc67geAQ},
}
keywords
Pen input; Multimodal interaction; Modular input
abstract
While software often allows user customization, most physical devices remain mainly static. We introduce VersaPen, an adaptable, multimodal, hot-pluggable pen for expanding input capabilities. Users can create their own pens by stacking different input/output modules that define both the look and feel of the customized device. VersaPen offers multiple advantages. Allowing in-place interaction, it reduces hand movements and avoids cluttering the interface with menus and palettes. It also enriches interaction by providing multimodal capabilities, as well as a means to encapsulate virtual data into physical modules that can be shared by users to foster collaboration. We present various applications to demonstrate how VersaPen enables new interaction techniques.

[15] VersaPen: Exploring Multimodal Interactions with a Programmable Modular Pen.
M. Teyssier, G. Bailly, E. Lecolinet.
In CHI'17 Extended Abstracts (demonstration): ACM SIGCHI Conference on Human Factors in Computing Systems, ACM (2017). 377-380.
doi hal pdf video bibcite
@inproceedings{teyssier:hal-01521566,
 address = {Denver, USA},
 author = {M. {Teyssier} and G. {Bailly} and E. {Lecolinet}},
 booktitle = {CHI'17 Extended Abstracts (demonstration): ACM SIGCHI Conference on Human Factors in Computing Systems},
 month = may,
 pages = {377--380},
 publisher = {ACM},
 title = {VersaPen: Exploring Multimodal Interactions with a Programmable Modular Pen},
 year = {2017},
 hal = {hal-01521566},
 image = {VersaPen-demo-CHI17.png},
 video = {https://www.youtube.com/watch?v=LYIjfUDTdbU},
}
keywords
Pen input; Multimodal interaction
abstract
While software often allows user customization, most physical devices remain mainly static. We introduce VersaPen, an adaptable, multimodal, hot-pluggable pen for expanding input capabilities. Users can create their own pens by stacking different input/output modules that define both the look and feel of the customized device. VersaPen offers multiple advantages. Allowing in-place interaction, it reduces hand movements and avoids cluttering the interface with menus and palettes. It also enriches interaction by providing multimodal capabilities, as well as a means to encapsulate virtual data into physical modules that can be shared by users to foster collaboration. We present various applications to demonstrate how VersaPen enables new interaction techniques.

[14] Grab 'n' Drop: User Configurable Toolglasses.
J. Eagan.
In 16th IFIP Conference on Human-Computer Interaction (INTERACT 2017), 10515, Springer (2017). 315-334.
doi pdf bibcite
@inproceedings{eagan:hal-01693001,
 address = {Mumbai, India},
 author = {J. {Eagan}},
 booktitle = {16th IFIP Conference on Human-Computer Interaction (INTERACT 2017)},
 month = sep,
 pages = {315--334},
 publisher = {Springer},
 title = {Grab 'n' Drop: User Configurable Toolglasses},
 volume = {10515},
 year = {2017},
}
keywords
user interfaces;toolglasses;instrumental interaction;polymorphism
abstract
We introduce the grab 'n' drop toolglass, an extension of the toolglass bi-manual interaction technique. It enables users to create and configure their own toolglasses from existing user interfaces that were not designed for toolglasses. Users compose their own toolglass interactions at runtime from an application's user interface elements, bringing interaction closer to the objects of interest in a workspace. Through a proof-of-concept implementation for Mac OS X, we show how grab 'n' drop capabilities could be added to existing applications at the toolkit level, without modifying application source code or UI design. Finally, we evaluate the power and flexibility of this approach by applying it to a variety of applications. We further identify limitations and risks associated with this approach and propose changes to existing toolkits to foster such user-reconfigurable interaction.

 
[13] 2017.
J. Eagan.
In IHM'17: Conférence francophone sur l'Interaction Homme Machine, (2017).
bibcite
@inproceedings{EAGAN-TALK2:17,
 author = {J. {Eagan}},
 booktitle = {IHM'17: Conf{\'e}rence francophone sur l'Interaction Homme Machine},
 month = sep,
 title = {2017},
 year = {2017},
}

 
[12] Revue et Perspectives du Toucher Social en IHM.
M. Teyssier, G. Bailly, E. Lecolinet, C. Pelachaud.
In IHM'17: Conférence francophone sur l'Interaction Homme Machine, ACM (2017).
doi pdf bibcite
@inproceedings{MT:IHM-17,
 author = {M. {Teyssier} and G. {Bailly} and E. {Lecolinet} and C. {Pelachaud}},
 booktitle = {IHM'17: Conf{\'e}rence francophone sur l'Interaction Homme Machine},
 month = aug,
 publisher = {ACM},
 title = {Revue et Perspectives du Toucher Social en IHM},
 year = {2017},
}
keywords
Social Touch, haptics, tactile feedback, emotional design, non-verbal communication
abstract
Touch is one of the primary channels of non-verbal communication. It is used to convey emotions and to establish bonds between people. Its use has already been considered in HCI to interact with devices, but it has rarely been used for direct emotional communication between individuals. This article presents a literature review of social touch in human-computer interaction. Standing at the cross-section of the psychology, HCI, emotions, and haptics research communities, we first review the role and importance of social touch for communicating emotions. We then discuss existing and emerging technologies to perform social touch, and finally we present new perspectives for interfaces in HCI.

 
[11] Investigating the Design Space of Smartwatches Combining Physical Rotary Inputs.
E. Brulé, G. Bailly, M. Serrano, M. Teyssier, Th. Jacob, S. Huron.
In IHM'17: Conférence francophone sur l'Interaction Homme Machine, ACM (2017).
bibcite
@inproceedings{BRULE:IHM-17,
 author = {E. {Brul{\'e}} and G. {Bailly} and M. {Serrano} and M. {Teyssier} and Th. {Jacob} and S. {Huron}},
 booktitle = {IHM'17: Conf{\'e}rence francophone sur l'Interaction Homme Machine},
 month = aug,
 publisher = {ACM},
 title = {Investigating the Design Space of Smartwatches Combining Physical Rotary Inputs},
 year = {2017},
}
abstract
Watches benefit from a long design history. Designers and engineers have successfully built devices using rotary physical inputs such as crowns, bezels, and wheels, separately or combined. Smartwatch designers have explored the use of some of these inputs for interaction. However, a systematic exploration of their combinations has yet to be done. We investigate the design space of interactions with multiple rotary inputs through a three-stage exploration. (1) We build upon observations of a collection of 113 traditional or electronic watches to propose a typology of physical rotary inputs for watches. (2) We conduct two focus groups to explore combinations of physical rotary inputs. (3) We then build upon the output of these focus groups to design a low-fidelity prototype, and further discuss the potential and challenges of rotary input combinations during a third focus group.

 
[10] Large scale automated assessment tools for code grading.
R. Sharrock.
In Microsoft symposium on large scale assessment, (2017).
bibcite
@inproceedings{SR:AD-17,
 address = {Adelaide University, Australia},
 author = {R. {Sharrock}},
 booktitle = {Microsoft symposium on large scale assessment},
 month = dec,
 title = {Large scale automated assessment tools for code grading},
 year = {2017},
}

 
[9] Démonstration de MarkPad : Augmentation du pavé tactile pour la sélection de commandes.
B. Fruchard, E. Lecolinet, O. Chapuis.
In IHM'17: 29ème conférence francophone sur l'Interaction Homme-Machine, (2017). 2.
pdf bibcite
@inproceedings{fruchard:hal-01577687,
 address = {Poitiers, France},
 author = {B. {Fruchard} and E. {Lecolinet} and O. {Chapuis}},
 booktitle = {IHM'17: 29{\`e}me conf{\'e}rence francophone sur l'Interaction Homme-Machine},
 month = aug,
 pages = {2},
 title = {D{\'e}monstration de MarkPad : Augmentation du pav{\'e} tactile pour la s{\'e}lection de commandes},
 year = {2017},
}
keywords
Gestural interaction; Bezel gestures; Tactile feedback; Spatial memory; Touchpad; User-defined gestures; Marking menus
abstract
MarkPad is a technique that takes advantage of the touchpad to allow the creation of a large number of size-dependent gestures. It relies on the idea of using visual or visuo-tactile marks on the touchpad, or a combination of the two. Gestures start from a mark on the border and end on another mark anywhere. MarkPad does not conflict with pointing and offers a novice mode that acts as a training mode for the expert mode. We present a working prototype that lets users specify shortcuts spatially organized as they wish. Associating actions with gestures leads to the creation of gestural menus, allowing shortcuts to be grouped semantically.

 
[8] Interaction Techniques Exploiting Memory to Facilitate Command Activation.
B. Fruchard, E. Lecolinet, O. Chapuis.
In 29ème conférence francophone sur l'Interaction Homme-Machine, (2017). 5.
pdf bibcite
@inproceedings{fruchard:hal-01577856,
 address = {Poitiers, France},
 author = {B. {Fruchard} and E. {Lecolinet} and O. {Chapuis}},
 booktitle = {29{\`e}me conf{\'e}rence francophone sur l'Interaction Homme-Machine},
 month = aug,
 pages = {5},
 title = {Interaction Techniques Exploiting Memory to Facilitate Command Activation},
 year = {2017},
}
keywords
human-computer interaction, spatial memory, semantic memory, learning and memorization, gestural interaction, data manipulation
abstract
The goal of this thesis is to propose a new category of interaction techniques based on methods for augmenting human memory, in order to provide, via gestural interactions, easy and instantaneous access to a large set of commands or data. This project makes two contributions: (1) improving the understanding of certain phenomena involved in gesture learning and command memorization; (2) proposing new gestural interaction techniques that facilitate memorization, building on the previous results and on knowledge drawn from mnemonic methods.

 
[7] Malleable User Interface Toolkits for Cross-Surface Interaction.
J. Eagan.
In HCI.Tools: Strategies and Best Practices for Designing, Evaluating, and Sharing Technical HCI Toolkits workshop at CHI 2017, (2017).
pdf bibcite
@inproceedings{eagan:hal-01693010,
 address = {Denver, United States},
 author = {J. {Eagan}},
 booktitle = {HCI.Tools: Strategies and Best Practices for Designing, Evaluating, and Sharing Technical HCI Toolkits workshop at CHI 2017},
 month = may,
 title = {Malleable User Interface Toolkits for Cross-Surface Interaction},
 year = {2017},
}
abstract
Existing user interface toolkits are based on a single user interacting with a single machine with a relatively fixed set of input devices. Today's interactive systems, however, can involve multiple users interacting with a heterogeneous set of input, computational, and output capabilities across a dynamic set of different devices. The abstractions that help programmers create interactive software for one kind of system do not necessarily scale to these new kinds of environments. New toolkits designed around these environments, however, need to be able to bridge existing software and libraries or recreate them from scratch. In this position paper, we examine these new constraints and needs. We look at three strategies for software toolkits that help to bridge existing toolkit models to these new interaction paradigms.

 
[6] Codecast: a step-by-step online execution tool to learn how to program in C.
R. Sharrock.
In MIT Learning seminar, (2017).
bibcite
@inproceedings{SR:MIT-17,
 address = {Cambridge, MA, USA},
 author = {R. {Sharrock}},
 booktitle = {MIT Learning seminar},
 month = apr,
 title = {Codecast: a step-by-step online execution tool to learn how to program in C},
 year = {2017},
}

 
[5] Integration of learning tools using LTI on edX.
R. Sharrock.
In Boston University Digital Learning Seminars, (2017).
bibcite
@inproceedings{SR:BU-17,
 address = {Boston, USA},
 author = {R. {Sharrock}},
 booktitle = {Boston University Digital Learning Seminars},
 month = apr,
 title = {Integration of learning tools using LTI on edX},
 year = {2017},
}

 
[4] Three tools to learn C programming online.
R. Sharrock.
In HarvardX seminar, (2017).
bibcite
@inproceedings{SR:HAR-17,
 address = {Cambridge, MA, USA},
 author = {R. {Sharrock}},
 booktitle = {HarvardX seminar},
 month = apr,
 title = {Three tools to learn C programming online},
 year = {2017},
}

 
[3] Présentation de l'outil Codecast.
R. Sharrock.
In Journée Comité des Usages Mutualisés du numérique pour l'Enseignement, (2017).
bibcite
@inproceedings{SR:P-17,
 address = {Universit{\'e} Paris Descartes},
 author = {R. {Sharrock}},
 booktitle = {Journ{\'e}e Comit{\'e} des Usages Mutualis{\'e}s du num{\'e}rique pour l'Enseignement},
 month = apr,
 title = {Pr{\'e}sentation de l'outil Codecast},
 year = {2017},
}

 
2016
[2] Large scale learning tools.
R. Sharrock.
In Edx Global Forum, (2016).
bibcite
@inproceedings{SR:EGF-16,
 address = {Paris, France},
 author = {R. {Sharrock}},
 booktitle = {Edx Global Forum},
 month = nov,
 title = {Large scale learning tools},
 year = {2016},
}

 
2015
[1] Servers, display devices, scrolling methods and methods of generating heatmaps.
J. Robinson, M. Ribière, M. Baglioni, E. Lecolinet, J. Daigremont.
US Patent 8994755, (2015).
pdf bibcite
@article{ELC:PATENT-15,
 address = {USA},
 author = {J. {Robinson} and M. {Ribi{\`e}re} and M. {Baglioni} and E. {Lecolinet} and J. {Daigremont}},
 month = mar,
 number = {8994755},
 title = {Servers, display devices, scrolling methods and methods of generating heatmaps},
 year = {2015},
}
abstract
Methods of generating heatmaps including receiving, at a first electronic device, first information associated with a first zone of a plurality of zones of a content item, determining at least one first concept related to the first information, receiving at least one target content characteristic, determining at least one second concept related to the at least one target content characteristic, and determining a first heat of the first zone based on the first and second concepts, the first heat representing a measure of similarity between the first and second concepts.

 
