Poster Abstracts

The posters that will be on display during the conference can be found below:

Tangible Digital Musical Instruments
Gareth Young (University College Cork)

This paper presents a discussion of the effects of haptic feedback on the performance of computer music with Digital Musical Instruments (DMIs). We propose to explore the application of haptic feedback by applying Human-Computer Interaction (HCI) evaluation techniques in the context of previous musical experience. Earlier HCI research has indicated that the experiences and expectations of the musician must be considered in the design of haptic DMIs if a tangible DMI is to be created. Therefore, several design recommendations are presented to address the physical-digital divide that currently exists between users of such instruments. It is expected that developing and testing future DMIs that follow these guidelines will bridge this divide.
Towards understanding usage patterns for a mobile Bluetooth speaker through data logging and analysis
David Considine (University of Limerick), Aidan Kehoe (Design Lab, Logitech), Noirin Curran (Design Lab, Logitech)

Many companies attempt to measure and analyze usage of apps and products in an effort to optimize the user experience and gain insights that inform the design of the next generation of product concepts. This type of behaviour pattern tracking adds a layer of richness alongside other in-depth user research, and can inspire the creation of new hypotheses. This poster reports on a project conducted during an internship at Logitech, Cork, which explored how to log and analyze usage of a top-selling Bluetooth speaker (Ultimate Ears Boom) and the associated smartphone app. This work-in-progress project has explored which data should be collected and how/when to perform that logging. It also explored how the data could be effectively processed post-logging in order to answer a range of usage and behaviour questions posed by Logitech design and marketing teams. The proof-of-concept logging software was developed to run on the Android platform, with the Flurry Analytics API selected as the logging component. The data logged included the context of the primary music-listening interactions (e.g., date, time, location) and media-control interactions such as play, skip, pause, volume changes, etc. The app was also enhanced to generate simulated datasets for a range of categories of “typical users”, as defined by the marketing team. The simulation of data was important to stress and confirm the operation of the logging and analysis components of the software prior to doing tests in the wild with real users. The poster describes the overall system architecture and the operation of the components. It also reports the key findings from the project with respect to logging data with the Flurry Analytics API, the challenges of post-processing and data analysis, and the importance of iterative simulation to validate a design prior to deployment to users in the real world.
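For illustration, a minimal TypeScript sketch of the kind of event logging described above; the `logMediaEvent` helper and event names are hypothetical stand-ins for the project's actual Flurry Analytics calls.

```typescript
// Hypothetical event-logging sketch mirroring the kind of records the project
// captured via the Flurry Analytics API (names and fields are illustrative).

interface UsageEvent {
  name: string;                      // e.g. "play", "skip", "volume_change"
  timestamp: string;                 // ISO 8601 date/time of the interaction
  params: Record<string, string>;    // extra context, e.g. location, level
}

const eventBuffer: UsageEvent[] = [];

function logMediaEvent(name: string, params: Record<string, string> = {}): void {
  eventBuffer.push({ name, timestamp: new Date().toISOString(), params });
}

// Primary listening interactions and media-control interactions:
logMediaEvent("play", { source: "bluetooth", location: "51.90,-8.47" });
logMediaEvent("volume_change", { level: "0.8" });
logMediaEvent("skip");
```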
Dialogue Machine Translation
Jinhua Du (Dublin City University), Longyue Wang (Dublin City University), Qun Liu (Dublin City University), Andy Way (Dublin City University)

Machine translation (MT) technology has been widely applied in many industry fields, such as software localization, patent translation, etc. However, for scenarios such as the translation of subtitles for audiovisual content, the circulation of meeting transcripts across multiple languages, and speech-to-speech interpretation, many challenging research problems remain. MT for these scenarios is defined as Dialogue Machine Translation (DMT). We conduct our research on DMT in the three most challenging aspects: 1) automatic construction of a dialogue corpus. Dialogue data differs substantially from general-domain data: besides the parallel sentences, it should also include dialogue information, e.g., speaker, time, dialogue act and location, to support the translation process; 2) spoken language processing for MT. The style of spoken language differs significantly from that of written text. For example, in spoken Chinese, pronouns are often omitted; when translating to English, these omitted pronouns need to be recognized and translated; 3) developing a real-time, task-oriented dialogue MT system. Based on the constructed dialogue corpus and the proposed DMT methodologies, we developed an online demo system for the hotel-booking scenario. Agents and customers can each use their own language, such as English or Chinese, to communicate freely and complete the room-booking task. Through investigating the internal structure of dialogue and the demo system, we achieved our goals: 1) we built a spoken-language corpus of 2 million sentence pairs; 2) the translation quality of dialogues is improved by 30%-60% on the Chinese-English language pair; 3) we developed a real-time dialogue translation system for the hotel-booking task. We believe that our proposed approaches could help both MT researchers and industry to boost the performance of conversational MT systems.
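For illustration, a hypothetical TypeScript sketch of a dialogue-annotated corpus entry carrying the parallel sentences plus the speaker, time, act and location information mentioned above; the field names and values are assumptions, not the project's actual schema.

```typescript
// Hypothetical schema for one entry in a dialogue-annotated parallel corpus.
interface DialogueSegment {
  sourceText: string;               // e.g. a Chinese utterance
  targetText: string;               // its English translation
  speaker: "agent" | "customer";    // who produced the turn
  dialogueAct: string;              // e.g. "request", "confirm", "greet"
  timestamp: string;                // when the turn occurred
  location?: string;                // optional scene/location information
}

const example: DialogueSegment = {
  sourceText: "请问今晚还有双人房吗？",
  targetText: "Do you have a double room available for tonight?",
  speaker: "customer",
  dialogueAct: "request",
  timestamp: "2016-08-12T19:32:00Z",
};
```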
ProphetMT: A Tree-based SMT-driven Controlled Language Authoring/Post-Editing Tool
Jinhua Du (Dublin City University), Andy Way (Dublin City University)

Authoring tools help people write papers and other texts efficiently and correctly. However, most authoring tools only support monolingual writing, which separates writing and translation into two independent workflows, so extra effort is required whenever translation is needed. This poster presents ProphetMT, a tree-based SMT-driven Controlled Language (CL) authoring and post-editing tool that combines writing and translation in one framework, i.e., the translation is produced as the writing proceeds. ProphetMT employs the source-side rules in a translation model and provides them as auto-suggestions to users. Accordingly, one might say that users are writing in a ‘Controlled Language’ that is ‘understood’ by the computer. ProphetMT also allows users to easily attach structural information as they compose content. When a specific rule is selected, a partial translation is promptly generated on-the-fly with the help of the structural information. Our experiments conducted on English-to-Chinese show that our proposed ProphetMT system can not only better regularise an author’s writing behaviour, but also significantly improve translation fluency, which is vital to reducing post-editing time. Additionally, when the writing and translation process is over, ProphetMT can provide an effective colour scheme to further improve the productivity of post-editors by explicitly featuring the relations between the source and target rules.
Conveying Visual Art to the Visually Impaired through Haptic and Audio Feedback
Julie Daunt (University College Cork)

Visual art is a broad, inclusive term used to describe all forms of painting, design, photography and sculpture. Yet the term is inherently exclusive, as it deems all creative outputs to be experienced purely visually, meaning those who cannot see cannot appreciate art. But is it reasonable to assume that those who are blind might not understand visual concepts in the same manner as those with full vision? It has been proven that those who are congenitally blind organise “visual information in much the same way as sighted people” [Pitt, 1998]. An experiment by Kennedy [1980] confirms this theory: several visually impaired people (VIPs) were asked to draw a table, and all recognised that the table would appear differently or be partially obscured depending on its position in relation to them, demonstrating knowledge of foreshortening and perspective. Another study by Zimler and Keenan [1983] investigated VIPs’ performance during a free-recall task involving groups of words linked by a common modality-specific attribute. Results showed that VIPs fared nearly as well on “red” and “round” words as sighted subjects, and performed better on “loud” words. Both of these studies show that a good knowledge of colour and perspective can be developed even when the visual means of experience is missing. With these results in mind, this paper investigates whether an artwork can be conveyed interactively to a VIP through the use of haptic and audio feedback. I created a web-based application devised to work on a tablet device and consisting of an interactive artwork. HTML5, JavaScript, CSS and SVG are the primary technologies used to develop the application. Following results from usability tests, this paper suggests that haptic and audio feedback can help develop a VIP’s mental picture of an artwork, thus providing better access to the visual arts.
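For illustration, a minimal TypeScript sketch of how a browser application like this might pair haptic and audio feedback on a touched SVG region, using the standard Vibration and Web Speech APIs; the element ID, vibration pattern and description text are hypothetical.

```typescript
// Sketch: haptic + audio feedback for a touched SVG region (hypothetical IDs).
// navigator.vibrate and speechSynthesis are standard browser APIs.

function describeRegion(patternMs: number[], description: string): void {
  if (navigator.vibrate) {
    navigator.vibrate(patternMs);                 // haptic cue for the region
  }
  const utterance = new SpeechSynthesisUtterance(description);
  speechSynthesis.speak(utterance);               // spoken description
}

const skyRegion = document.getElementById("sky"); // an SVG <path> in the artwork
skyRegion?.addEventListener("touchstart", () => {
  describeRegion([50, 30, 50], "A pale blue sky filling the top third of the painting.");
});
```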
Translator-computer interaction: Exploring cognitive ergonomics in the workplace
Carlos Teixeira (Dublin City University)

Translators are renowned for spending long hours staring at computer screens and typing on their keyboards, having to switch frequently between different types of tools and technologies (such as translation memories, machine translation and terminology databases) while at the same time having to cope with tight deadlines and varying client expectations. This often leads to physical conditions such as tendinitis and eye strain, as well as to mental fatigue and stress. This poster looks at the interaction between 10 professional translators and their tools of the trade in a workplace environment at a British translation agency. The study presented here is the first stage of a larger project and takes a holistic approach to how translators interact with the computer artefacts at their disposal. For Stage 1, I recorded the translators’ interactions using screen recording, keystroke logging and eye tracking during translation sessions of approximately 30 minutes. The results presented in the poster include an analysis of how the participants switch between screens in a dual-monitor configuration, how they switch between tools (from their main CAT tool to the browser, terminology management and quality assurance software) and how they fixate on different areas of the main tool. The results of Stage 1 will inform the design of a more controlled experiment in Stage 2, where we plan to test hypotheses concerning specific tool features in order to identify how the tool ergonomics can be improved to optimise cognitive processing. This study is interdisciplinary in nature and draws on existing research in fields such as Human-Computer Interaction, Intelligent User Interfaces and Personalisation.
Natural Interaction with Guitar Audio Effects during a Musical Performance
John Sheehy (Cork Institute of Technology), Donal O’Donovan (Cork Institute of Technology)

During a musical performance, the control of guitar-based audio effects is typically achieved using devices that are placed on the floor and controlled by the musician’s feet. These effect pedals are used to extend the gamut of sounds produced. Examples of performers who rely heavily on such effects are head-banging heavy-metal guitarists and grooving funk bassists; while their movements are not directly involved in making the music, they are synonymous with the sound being created. Previous approaches to human-computer interaction for music applications include modifying traditional instruments, gesture-controlled virtual instruments and capturing the movements of dancers to augment a performance. However, the movements of a musician performing with a traditional instrument have been underutilised. In particular, there has been no attempt to modulate sound effects as a function of the harmonious, indirect movements of the musician’s body and instrument. Indeed, current foot-pedal approaches restrict control to a fixed location; using the distance from the foot-pedal as a controlling parameter instead allows the musician more freedom of movement to focus on other musicians, or the audience, creating new options for interaction during a live performance. The contribution of this research is a proof-of-concept system, based on the Microsoft Kinect and digital audio effects, for interpreting the natural movements of a musician and their instrument to control audio effects without excessive interference, as the musician’s hands are free to play the instrument as normal.
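For illustration, a minimal TypeScript sketch of the distance-as-parameter idea, with a Web Audio gain node standing in for an arbitrary digital audio effect; the distance source (e.g., a Kinect skeleton joint) and the working range are assumptions, not the authors' implementation.

```typescript
// Sketch: mapping musician-to-pedal distance to an effect parameter.
// The distance source (e.g. a Kinect skeleton joint) is assumed; a Web Audio
// GainNode stands in for an arbitrary digital audio effect.

const audioCtx = new AudioContext();
const effect = audioCtx.createGain();

function onDistanceReading(distanceMetres: number): void {
  // Normalise an assumed 0.5-3.0 m working range to 0-1.
  const t = Math.min(Math.max((distanceMetres - 0.5) / 2.5, 0), 1);
  // Smoothly ramp the effect parameter toward the mapped value.
  effect.gain.linearRampToValueAtTime(t, audioCtx.currentTime + 0.05);
}

onDistanceReading(1.8); // e.g. musician standing 1.8 m from the pedal
```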
Emotional Design, Technology Enhanced learning and HCI Education
Denise McEvoy (National College of Art and Design), Dr. Benjamin R. Cowan (University College Dublin), Dr. Marcus Hanratty (National College of Art and Design)

The past decade has seen major advancements in both technology and its acceptance in educational environments. This Ph.D. research is examining the application of emotional design theory via technology-enhanced learning tools for positive student engagement in the field of Human-Computer Interaction (HCI) education. It aims to explore how emotionally designed interfaces can engage the learner on a positive level while reducing perceived difficulty and learner frustration. The current generation of learners interact daily with products that offer excellent user experiences across a multitude of platforms, fostering high expectations of the tools developed for technology-enhanced learning (TEL). Research has begun to merge Technology Enhanced Learning and Emotional Design in an attempt to explore the positive benefits of the role emotions play in engagement in digital learning environments.
Emotional design theory holds that a product should be designed with emotional intent. In TEL, emotional design means constructing the interface and user experience to be more visually appealing, functional, usable and engaging. As a result, the positive emotions elicited in the user can enhance the learning experience: the task is perceived to be less difficult, while both motivation and learning outcomes increase.
Preliminary findings from primary research conducted with HCI educators in Ireland indicate that TEL tools are being used to support HCI education due to growing student numbers, demand for 24/7 access and student engagement. The majority agreed that aesthetics were important for students’ perception of quality and continued use. It was also noted that the aesthetic quality of general educational TEL tools was lacking or overlooked.
Social Media Interactive Learning Environment (SMILE): a smart ageing project
Ailis Ni Chofaigh (Limerick Institute of Technology), Denise McEvoy (Limerick Institute of Technology), Dr. Seamus O’Ciardhuain (Limerick Institute of Technology)

This research is conducted as part of an MSc in Human Computer Interaction based on Smart Ageing. Its main aim is to help teach smart agers how to use social networking sites (SNSs) to combat digital and social isolation, engaging them through the use of emotional design and gamification theory. Smart agers, for the purpose of this research, are classified as users above fifty-five years of age. By utilising the artefact developed through this research, smart agers can learn how to use SNSs in a way that can enhance their lives, while also building confidence with Information & Communication Technology (ICT) that will extend beyond the realm of social media. Through qualitative research and user-centred design methodologies, SMILE hopes to bridge the digital divide by providing smart agers with a tool designed specifically for their ICT and learning needs. Involving the users at the research, design and reflection phases also gives smart agers an opportunity to have direct input into how SMILE can best work for them. By designing a user experience specific to their requirements, the project aims to improve user engagement and unlock the potential benefits that ICT holds for smart agers. Ireland, along with many other countries, has recognised the need to protect its ageing population from the harmful effects of social isolation and loneliness. Such feelings can be combated by having strong social connections, which also improve one’s overall well-being. This research will examine ways in which HCI and technology-enhanced learning can be utilised by smart agers against feelings of isolation and loneliness through online connectivity and communities. SMILE’s main objective, therefore, is to teach smart agers to use ICT to enhance their social connectivity online and strengthen their social supports and community-based networks.
Automatic Affect State Detection using Fiducial Points for Facial Expression Analysis
Anas Samara (Ulster University), Leo Galway (Ulster University), Raymond Bond (Ulster University), Hui Wang (Ulster University)

Current advancements in digital technology indicate that there is an opportunity to enhance computers with automated intelligence in order to understand human feelings and emotions that may be relevant to system performance. Furthermore, one of the most important aspects of the Ubiquitous Computing paradigm is that machines should be characterised by autonomy and context awareness, permitting natural and reliable interaction similar to human-human interaction. Although various techniques have been proposed for automatically detecting a user’s affective state using facial expressions, this remains a research challenge in terms of achieving a consistently high level of classification accuracy. The current research probes the use of facial expressions as an input perception modality for computer systems. Facial expressions, which are deemed the most effective input channel in the domain of Affective Computing, are generated from the movements of facial muscles in different regions of the face, primarily the mouth, nose, eyes, eyebrows, and forehead. Due to the correlation between facial expressions and human emotions, it is foreseen that automatic facial expression analysis will endow computer systems with the ability to recognise human affective states. The presented study considers the use of facial point distance vectors within the representation of facial expressions, along with investigations into a range of supervised machine learning techniques, for affective state classification. Results indicate that a higher level of classification accuracy and robustness is achievable in comparison to using standard Cartesian coordinates of the fiducial points.
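For illustration, a minimal TypeScript sketch of the distance-vector representation described above: pairwise Euclidean distances between fiducial points replace raw Cartesian coordinates as the feature vector. This is illustrative only, not the authors' implementation.

```typescript
// Sketch: building a facial-point distance vector from fiducial points.
interface Point { x: number; y: number; }

function distanceVector(points: Point[]): number[] {
  const features: number[] = [];
  for (let i = 0; i < points.length; i++) {
    for (let j = i + 1; j < points.length; j++) {
      const dx = points[i].x - points[j].x;
      const dy = points[i].y - points[j].y;
      features.push(Math.hypot(dx, dy)); // pairwise Euclidean distance
    }
  }
  return features; // fed to a supervised classifier in place of raw coordinates
}
```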
Route Rehearsal
Matt Ryan (University College Cork)

“Route Rehearsal” is a browser-based route learning application specially designed for UCC’s blind or visually impaired (V.I.) students. While the university’s Disability Support Service provides mobility training to enable students to get around the campus, learning the routes is a time-consuming process, especially at the start of semester. The project has developed a helper application that provides a way for students to familiarise themselves with a particular route before they attend training sessions. It therefore attempts to strengthen mobility skills through memory and recall of previously learned information, as distinct from the conventional approach of direct feedback on location, e.g., handle vibrations from proximity detectors or headphone instructions from a smartphone GPS navigation app.
An iterative design process, driven by usability testing with detailed feedback from blind/V.I. participants, has progressively refined a design concept that is implemented with HTML, CSS and JavaScript. Two of the HTML5 standard’s new APIs – Web Audio and Web Speech – are combined to present a route in the form of step-by-step speech instructions augmented by sound samples taken from field recordings. Spatial audio effects are applied at certain route points to simulate the feeling of changing direction or to localise sound sources. In this way sound is intended to act as a mental trigger so that a person can adopt the correct orientation when later walking along a route. A fundamental aspect of the app’s design is that routes are generated from a database which is in turn built from a map drawn to the specific needs of the visually impaired and containing references to geo-tagged sound samples. In this way, a person’s recognition of locations is strengthened as their virtual routes criss-cross with each other over time, helping to build an effective mental picture of the campus grounds.
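For illustration, a minimal TypeScript sketch of how one route step might combine a Web Speech instruction with a spatially positioned Web Audio sample; the step text, sample URL and panner position are hypothetical.

```typescript
// Sketch: one route step = spoken instruction + spatially placed sound sample.
// The step data and sample URL are hypothetical.

const ctx = new AudioContext();

async function playRouteStep(instruction: string, sampleUrl: string, x: number): Promise<void> {
  // Speak the step-by-step instruction (Web Speech API).
  speechSynthesis.speak(new SpeechSynthesisUtterance(instruction));

  // Place the field-recording sample in space (Web Audio API PannerNode).
  const response = await fetch(sampleUrl);
  const buffer = await ctx.decodeAudioData(await response.arrayBuffer());
  const source = ctx.createBufferSource();
  const panner = ctx.createPanner();
  panner.positionX.value = x;        // e.g. x > 0 places the sound to the right
  source.buffer = buffer;
  source.connect(panner).connect(ctx.destination);
  source.start();
}

playRouteStep("Turn left at the library entrance.", "/sounds/library-foyer.mp3", 1.5);
```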
LISTEN
Niall O’Hagan (Limerick Institute of Technology), Denise McEvoy (Limerick Institute of Technology), Seamus O’Ciardhuain (Limerick Institute of Technology)

This research describes planned work for an MSc in Human Computer Interaction, exploring how digital storytelling can be utilised to create an engaging interactive audio archive, in an effort to tackle issues of isolation and social memory for mature users of technology. For the purpose of this research, mature users are classified as users above the age of fifty-five. Everyone has a story to tell, but for some there is no one to listen. Isolation is a common issue for the ageing population of Ireland and worldwide, whose families may be located elsewhere or whose friends and loved ones have passed away. Even when isolation is not an issue, stories and knowledge can simply be lost from generation to generation, and community members’ accounts of history vanish because stories are not recorded or remembered when shared.
Stories are lost forever if they are not archived. This project aims to tackle that loss through human-computer interaction, applying user-centred design to the development of a digital archive for chronicling audio-based digital storytelling. This digital archive allows mature users to remain fully engaged in their communities by sharing and contributing to their past, present, and future. The user-centred design process of desirability, feasibility and viability that runs through the research began by examining the needs, desires and behaviours of the people who will be affected by the project outcomes, in order to fully empower and engage the user group in the development of the proposed artefact. The research conducted will endeavour to allow for new knowledge creation and best practice when designing interactive digital content for mature technology users.
Can existing apps support healthier food purchasing behaviour? Assessing the integration of behaviour change theory and user quality components in mobile apps.
Sarah Jane Flaherty (University College Cork), Mary McCarthy (University College Cork), Alan Collins (University College Cork), Fionnuala McAuliffe (University College Dublin)

Supporting healthier food purchasing behaviour is a key objective of many dietary interventions, but achieving long-term change has proven difficult. Greater utilisation of habit theory in intervention design may be beneficial, with potential strategies proposed by van’t Riet et al. (2011). Mobile apps offer a potentially effective approach for intervention delivery, but some fail to adequately integrate relevant theory or user quality components, which can limit effectiveness. The study aim was to assess existing mobile apps on their integration of user quality components and behaviour change theory relevant to food purchasing behaviour. Using pre-defined exclusion criteria, a sample of twelve apps was assessed. User quality was assessed using the Mobile App Rating Scale (MARS). Behaviour change techniques (BCTs) (Michie et al., 2013) were assigned to each strategy and used to assess theory integration. Findings suggest a lack of focus on food purchasing behaviour, with most apps focusing on behavioural outcomes such as weight management. Integration of theory was adequate, with an average of three BCTs present in the selected apps. The most popular were goal setting (outcome of behaviour), self-monitoring of outcome(s) of behaviour, and conserving mental resources. User quality was good, with an average MARS score of 3.8 out of 5; the lowest average scores were associated with the engagement and information categories. No significant relationship was seen between integration of behaviour change theory and user quality components. Existing apps could play a role in supporting healthier purchasing behaviour, but improvements in design are needed to maximise effectiveness. Existing apps appear to integrate theory and user quality components, but do so to a disproportionate extent, with a continued focus on one over the other. This may diminish the user experience and limit effectiveness. Future work should assess the importance of these components to the consumer to inform effective mobile app development.
Towards Understanding How Speech Output Affects Navigation System Credibility
Benjamin Cowan (University College Dublin), Derek Gannon, Jenny Walsh, Justin Kinneen, Eanna O’Keefe, Linxin Xie (University College Dublin)

Navigation systems are widely used, yet little is understood about how aspects of the interaction impact our assessment of these systems. Our work focuses on the speech output, exploring how accent and system errors affect our credibility judgements. Findings from a small-scale pilot study show that destination errors significantly affect user trust and competence assessments of a navigation system. People also rate navigation systems whose speech output has an accent similar to their own as more trustworthy than a system using a different accent, irrespective of the destination errors made. Future work looks to increase the scale of the study and add further conditions to explore the roles of user nationality, accent and the geographical location being navigated in system credibility.
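For illustration, a minimal TypeScript sketch of how speech-output conditions with different accents can be produced with the standard Web Speech API; voice availability varies by platform, and the language tags shown are assumptions, not the study's materials.

```typescript
// Sketch: selecting same- vs different-accent voices for navigation prompts.
// Which voices exist depends on the platform; the BCP-47 tags are illustrative.

function speakWithAccent(text: string, langTag: string): void {
  const utterance = new SpeechSynthesisUtterance(text);
  const voice = speechSynthesis.getVoices().find(v => v.lang === langTag);
  if (voice) utterance.voice = voice;  // falls back to the default voice otherwise
  speechSynthesis.speak(utterance);
}

speakWithAccent("In 200 metres, turn right.", "en-IE"); // similar-accent condition
speakWithAccent("In 200 metres, turn right.", "en-US"); // different-accent condition
```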
Tailoring an eHealth intervention for the treatment of overweight and obesity: a person-centred approach (work in progress)
Kathleen Ryan (University College Cork)

In Ireland, 37% of adults are overweight and a further 23% are obese; recent estimates suggest that we are on course to be the most obese country in Europe by 2025 (Healthy Ireland Survey, 2015). This poses a huge economic and resource burden on our healthcare system, with the annual cost of overweight and obesity in ROI estimated at €1.13 billion (Dee et al., 2013). Behavioural change interventions that seek to increase physical activity and reduce saturated fat and calorie intake result in weight loss and are effective in managing overweight/obesity (Lang & Froelicher, 2006). However, weight loss results vary, with some behavioural techniques being more effective for some people than for others, and we have yet to establish why (Maclean et al., 2015). In addition, Irish people are among the highest users of mobile phones in the western world, with 4 million people having either a smartphone or access to a tablet, and three quarters of our population accessing the internet every day (Connected Living Survey, 2015).
Given that our lifestyles now involve greater use of technological devices for work and leisure, eHealth interventions, delivered via the internet using mobile devices, present a practical strategy for delivering behaviour change interventions at scale. However, eHealth interventions suffer issues of low engagement and adherence (Yardley et al., 2015). In order to develop more effective and cost-effective behavioural interventions to promote weight loss, it is important to understand which behaviour change techniques work for which person, and to tap into the technological advances presented by eHealth strategies for obesity management. Aims: (i) to explore key behavioural and psychosocial elements influencing weight loss in a successful behaviour change intervention; (ii) to create an eHealth intervention that may be tailored to account for individual differences between participants; (iii) to establish the effectiveness and cost-effectiveness of the eHealth intervention.
A HCI-driven approach to web localisation for a more accessible multilingual Web
Silvia Rodríguez Vázquez (Dublin City University)

Successful access to information on the Web is heavily conditioned by the end user’s web browsing experience. This, in turn, depends on (i) the perceivability, operability, understandability and robustness of the website that is being consulted, and (ii) the performance of the user agents needed by the end user to retrieve and access web content. Together, these elements lay the foundation for a smooth and accessible human-computer interaction (HCI), particularly for people with disabilities, who rely on various assistive technologies (AT) to access the Web.
Drawing on prior work showing low levels of compliance with web accessibility standards and the scant attention paid to this issue within Translation Studies, our project seeks to advocate for a higher level of accessibility awareness and social responsibility in the production of multilingual websites, known as the web localisation process. We argue that, when adapting an existing website (from a linguistic, cultural and technical perspective) to render it multilingual, localisation practitioners should ensure the proper functioning of the final translated product. This implies guaranteeing an AT-mediated HCI experience equivalent to that of people who do not use AT. Encouraging ‘accessibility thinking’ among localisation actors would avoid disruptive redesign efforts at later stages of the multilingual web development cycle and, ultimately, contribute to a more inclusive Web for all.
In order to identify the current accessibility gaps in localisation workflows, as well as to suggest how the implementation of accessibility standards could be seamlessly integrated therein, we are conducting interviews with CTOs and localisation engineers from leading localisation service providers to better understand how localised content is produced nowadays and what would motivate them to embrace more user-oriented strategies. The data collected will be triangulated with the output of an eye-tracking study measuring whether inaccessible web content can be easily identified as such by web translators and amended in the target product.
Exploiting Ubiquitous Computing, Mobile Computing and the Internet of Things to promote Science Education
Kieran Delaney (Cork Institute of Technology), Alex Vakaloudis (Cork Institute of Technology), Achilles Kameas (Computer Technology Institute & Press, Greece), Ioannis D. Zaharakis (Computer Technology Institute & Press, Greece)

Many exciting new technologies are emerging, such as Ubiquitous Computing (UbiComp), Mobile Computing (MobiCom) and the Internet of Things (IoT); in the following, we refer to them collectively as UMI. UMI technologies attempt to revolutionise everyday life by “allowing computers themselves to vanish into the background”. UMI applications are envisioned as a possible next-generation computing environment in which each person continually interacts with hundreds of nearby wirelessly interconnected devices; as a result, radical new uses of portable information technology emerge based on “the nonintrusive availability of computers throughout the physical environment, virtually, if not effectively, invisible to the user”. We describe ongoing work in the H2020 project “UMI-Sci-Ed”, which is investigating the introduction of UMI interaction technologies in education. By carefully exploiting state-of-the-art technologies to design educational tools and activities, the project seeks to offer novel educational services, implement innovative pedagogies and enhance students’ and teachers’ creativity, socialisation and scientific citizenship.
The project’s primary goal is to put these technologies into practice, so as to enhance the level of science, technology, engineering and mathematics (STEM) education that young girls and boys receive and to make the prospect of pursuing a career in domains pervaded by UMI more attractive. The orientation of “UMI-Sci-Ed” is entrepreneurial and multidisciplinary, in an effort to raise young boys’ and girls’ motivation in science education and to increase their prospects of choosing a career in pervasive and mobile computing and the IoT.
We will describe ongoing work to develop an open, fully integrated training environment that will offer 14-16 year old students and their teachers an open repository of educational material, educational scenarios, training material and activities, social tools to support communities of practice, entrepreneurship training, showcases, self-evaluation online tools, mentoring, and content and information management.
SenseCare: Sensor Enabled Affective Computing for Enhancing Medical Care
Kieran Delaney (Nimbus Centre, Cork Institute of Technology), Alfie Keary (Cork Institute of Technology), Paul Walsh (Cork Institute of Technology)

This poster reports on the ongoing research in the H2020 SenseCare project, which is investigating new ICT solutions with the potential to enhance and advance future healthcare processes and systems. The project is using sensory and machine learning technologies to provide emotional (affective) and cognitive insights into the well-being of patients so as to provide them with more effective treatment across multiple medical domains.
The project brings together a diverse group of subject matter experts from industry and academia, with the objective of developing technologies and methods that will lessen the enormous and growing healthcare costs of dementia and related cognitive impairments that burden European citizens, estimated to exceed €250 billion by 2030. The poster will report on the current research tasks, focused on the primary objective of developing “a cloud-based affective computing operating system capable of processing and fusing multiple sensory data streams to provide cognitive and emotional intelligence for AI connected healthcare systems”. This system will deliver users’ cognitive/affective state data by applying this fusion approach to a range of different off-the-shelf sensory devices, and will exemplify the use of the platform through its application in the dementia care and connected healthcare domains. The poster will also describe the project’s key goals, methodology, and planned outputs, including:
• Specifying and engineering the architecture of the platform and releasing two versions of the platform cloud infrastructure during the project.
• Creating and evaluating two use case test pilots (relating to the dementia care and connected health medical domains) that integrate with, use and apply the services of the platform.
• Specifying and engineering a number of medical informatics applications that will run on the platform and that will also be tested and evaluated as part of the use case test pilot phases.
A participatory action research investigation into the value of agency in dementia
Sarah Foley (School of Applied Psychology, University College Cork)

Introduction: The aim of this research is to understand meaningful, agentive experience in community for people with dementia (PWD), with a view to supporting it in the context of Dementia Friendly Communities (DFC). While DFC have become one of the main focuses in the development of dementia care, most evidence of progress consists of reports from charity bodies. While much work has been done to accommodate the needs of people with dementia in communities, this research aims to further respect the personhood of people with dementia by understanding and supporting ways in which they can actively contribute to their communities.
Method: Using participatory action research, this study aims to understand how people with dementia can be supported within DFC to increase their sense of agency. An ethnographic study is currently being carried out, focusing on the ways in which agency can be obtained through the development and maintenance of meaningful relationships. Findings from this study will inform the design and evaluation of a community-based intervention for PWD to test how they might be supported to have meaningful agentive experiences in DFC.
Results: This ethnography will result in the development of a conceptual framework of ‘agency in community’ which will inform the design of a service/supportive technology that will be subject to an action-intervention based evaluation.
Designing for play: material investigations of facilitating playful interactions in the museum
Denise Heffernan (Cork Institute of Technology), Dr Kieran Delaney (Cork Institute of Technology)

This work in progress discusses the role of materials in designing objects that become playful and, in particular, the opportunities or limitations afforded by a range of actual materials in facilitating playful interactions in museum environments.
Increasingly, museums have embraced gameful and playful design. Play is a form of understanding who we are, what surrounds us, and how to engage with others; the social context of play is particularly important in creating memorable experiences. Research on promoting play within the museum has focused on educational goals; little consideration has been given to designing for the intrinsic value of play itself. Museums have always been early adopters of technology, and increasingly visitors expect to encounter some form of digital technology during their museum visit. Digital technologies have typically been embraced with the aims of democratising knowledge, contextualising information and ultimately boosting visitor numbers.
This paper presents a work in progress that hypothesises that combining emerging technologies and playful design can help support a social and collaborative museum experience. Currently the research is investigating the semiotics afforded by different materials in potential playful objects, focusing on the actual experience of the materials used. We are investigating if and how materials can evoke playful behaviours, and whether there are playful characteristics and other intangible meanings inherent in the materials themselves.
A novel human-computer interface creating a framework for the cognitive ergonomics of ECG interpretation
Andrew Cairns (Ulster University), Raymond Bond, Dewar Finlay, Cathal Breen, Daniel Guldenring, Robert Gaffney, Pat Henn, Anthony Gallagher, Aaron Peace

The 12-lead electrocardiogram (ECG) interpretation process remains predominantly paper-based. However, the ECG in this format creates a significant cognitive workload for an interpreter due to its complexity and the plethora of knowledge that is required to interpret an ECG. As a consequence, this often leads to incorrect or incomplete interpretation of an ECG. Even expert clinicians have been found to act impulsively and provide a diagnosis based on their first impression, and therefore often miss co-abnormalities. To compound this, it is widely reported that there is a lack of competency in ECG interpretation. This leads to a demand to optimise the interpretation process.
With health services wanting to reduce costs by becoming paperless, we see an opportunity to use interactive human-computer interfaces to guide and assist the interpreter in the ECG reporting process. A digital interactive computing system was therefore developed to structure the cognitive ergonomics of a clinician while interpreting an ECG. The system deconstructs a 12-lead ECG into a recognised five-step ECG reporting procedure. The five-step procedure is presented across a series of interactive web pages which prompt the clinician to progressively interpret the ECG. The system was developed responsively for an online environment with key principles including consistency, user feedback and familiarity of terminology for end-users. This creates the provision for clinicians to access the system ubiquitously across a spectrum of platforms and devices. Human-computer interaction in healthcare is an important research domain, as interactive clinician-friendly systems will continue to be implemented in health services to help guide the cognitive ergonomics of clinicians in practice.
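For illustration, a minimal TypeScript sketch of a progressive step-by-step reporting structure of this kind; the step names and prompts are placeholders, not the system's actual five-step procedure.

```typescript
// Sketch: presenting a five-step ECG reporting procedure as a sequence of
// interactive pages. Step names here are placeholders, not the system's own.

interface ReportStep {
  title: string;
  prompt: string;
  response?: string;   // the clinician's structured answer for this step
}

const steps: ReportStep[] = [
  { title: "Step 1", prompt: "Assess heart rate" },
  { title: "Step 2", prompt: "Assess rhythm" },
  { title: "Step 3", prompt: "Assess conduction intervals" },
  { title: "Step 4", prompt: "Assess waveform morphology" },
  { title: "Step 5", prompt: "State the overall interpretation" },
];

let current = 0;

function submitResponse(answer: string): void {
  steps[current].response = answer;
  if (current < steps.length - 1) {
    current++;                       // advance the clinician to the next page
  } else {
    console.log("Report complete:", steps);
  }
}
```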
A Smart Sensor Glove for Human Computer Interfacing
Brendan O’Flynn (Tyndall National Institute), Javier Torres Sanchez (Tyndall National Institute)

In immersive Virtual Reality systems, real-world visual and auditory cues are partially or completely blocked out, and the user has a sensory experience of being inside the computer-generated world. The experience is made ever more real through the use of hand-held and/or wearable devices that in some cases deliver haptic feedback invoking sensations of touch. To enable Human Computer Interaction (HCI) in this immersive fashion, high-precision data acquisition systems need to be developed which are accurate, require minimal calibration and provide real-time data streams wirelessly. The development of such a glove-based system lends itself to multiple use cases, including gaming and hand healthcare (e.g., Rheumatoid Arthritis (RA) monitoring).
This poster describes the development of such a data acquisition system in the form of a smart glove designed to meet user requirements associated with accuracy and precision, consistent recognition of gestures, low latency, and haptic feedback. In addition, the novel IMU-based wireless smart glove detailed in this paper removes the requirement for sensor calibration by teaming accelerometers and gyroscopes with intelligent software techniques.
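The abstract does not detail the glove's fusion techniques; as a generic illustration of accelerometer-gyroscope fusion, the following TypeScript sketch applies a standard complementary filter to estimate one orientation angle (a textbook technique, not the authors' algorithm).

```typescript
// Sketch: a standard complementary filter fusing gyroscope and accelerometer
// readings into a single pitch estimate. This is a generic textbook technique,
// not the glove's actual fusion algorithm.

const ALPHA = 0.98;   // weighting between gyro integration and accel reference
let pitch = 0;        // estimated angle in radians

function updatePitch(gyroRate: number, ax: number, az: number, dt: number): number {
  const accelPitch = Math.atan2(ax, az);   // gravity-based absolute reference
  pitch = ALPHA * (pitch + gyroRate * dt)  // integrate gyro rate (drifts)
        + (1 - ALPHA) * accelPitch;        // correct drift with the accel reference
  return pitch;
}

// e.g. one update at 100 Hz: updatePitch(0.01, 0.05, 9.78, 0.01)
```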
Predicting decision autonomy through Situated Awareness: Towards a better understanding of informed consent on Health Social Networking Sites (HSNs)
Aideen Lawlor (Health Information Systems Research Centre (HISRC)), Lynch, L., O’Connor, Y. and Heavin, C.

This is the first study of the Consenting HeAlth Related Data through Social Media (CHASM) project, being conducted by the Health Information Systems Research Centre (HISRC) and funded by the Wellcome Trust. The CHASM project will investigate the digital consent process for users sharing Personal Health Information (PHI) on Health Social Networks (HSNs). The aim of the project is to develop best practice guidelines for HSNs in order to obtain truly informed consent from users when they agree to an HSN’s privacy policies and terms and conditions of use. These guidelines could also be extended to internet-mediated researchers and researchers obtaining consent electronically. The current consent process used by HSNs is speculated to be passive: users are believed to be oblivious to the privacy and security risks to which they are exposing their PHI, and therefore cannot actively mitigate these risks. The first study of the CHASM project aims to move beyond speculation by establishing the degree of decisional autonomy afforded to users when they consent to the privacy policies and terms and conditions of HSNs, and by identifying the influential elements of informed consent that are associated with decisional autonomy. A research model for the study was developed by integrating the elements of informed consent with the Situational Awareness (SA) framework, which is used for predicting decision making. The findings from this study will lay the foundation for the CHASM project by ascertaining how cognizant users are of consenting to the privacy policies and terms and conditions of HSNs, and by identifying the informed consent elements that need to be integrated into the design of the digital consent process to facilitate decisional autonomy.
Immersive Virtual Reality in Science Education
Aaron Bolger (University College Cork), Nadia Pantidi (University College Cork), Conor Linehan (University College Cork)

Aim: To understand the quantity and quality of work that has investigated the use of immersive virtual reality in science education in primary and secondary school.
Background: There are still significant challenges in the teaching and learning of scientific concepts. In particular, it has been found that students are failing to understand basic scientific concepts, or hold deep-rooted misconceptions about them that are in contrast with scientific views (Smith & Neale, 1989; Duit & Treagust, 2003). Researchers agree that students’ difficulties with learning science occur under commonplace pedagogical approaches (Hewson & Hewson, 1983), and that science is an informed knowledge-building process requiring a participatory element to be effectively taught (Jackson & Fagan, 2000; McFarlane, 2013; Duschl, 2008). Immersive virtual reality (IVR) could be a useful tool for supporting these participatory elements.
Methodology: This scoping review follows the methodological framework set out by Arksey & O’Malley (2005).
Results: The initial search yielded 843 results from 4 electronic databases. Of these, 5 results were found to be directly relevant to this review after refinement. It was found that studies either sought to create an IVR intervention for teaching and learning science, citing it as useful and beneficial, or sought to investigate how IVR aided the participatory learning process. It can be postulated that work in this area is limited, as noted by Dede, Salzman & Loftin (1996), and as recently as Freina & Ott (2015). The research generally shows a positive attitude towards the potential of IVR for participatory science education interventions; however, the evaluation of the studies in each paper is either missing entirely or lacks sufficient evidence to support this hypothesis.
Conclusion: Immersive virtual reality interventions have potential benefits that could aid the learning outcomes of primary and secondary school science education; however, more rigorous and structured evaluation methods must be employed in order to confirm or refute this potential.
Attentional Capacity and Clinical Performance: Eye Tracking Cardiologists Performing Simulated Coronary Angiography
Jonathan Currie (Ulster University), Raymond Bond, Paul McCullagh, Pauline Black, Dewar Finlay, Stephen Gallagher, Anthony Gallagher, Peter Kearney

Simulation-based training is driven by patient safety and by Kohn’s 2000 report ‘To Err is Human’, which revealed that up to 96,000 patients die every year in the USA due to medical error. Computer-based simulation has been proven to produce a superior skill set, with fewer errors and better transfer of training, in general surgeons. Eye-tracking features have recently been shown to discriminate between novices and experts in surgical settings. An aspect of performance yet to be analysed is attentional capacity (AC) and the corresponding visual attention (VA) from eye tracking. A PhD-level study has been designed to capture visual attention during attempts at simulated coronary angiography while AC is tested. The initial pilot study will recruit eight registrars and consultants. We hypothesise that VA is linked with AC and that expert surgeons will demonstrate higher capacity when tested.
The recording will take place in the ASSERT Centre, University College Cork, using a high-fidelity simulation suite. Participants perform a coronary angiography case twice, alongside an additional task that acts as a measure of their AC: checking a supplementary display monitor and responding to playing cards when they appear. Primary outcomes will involve statistical analysis to determine the relationships between (1) AC and surgical performance and (2) VA and AC. If predictive metrics are found to exist for good/bad performance at surgical tasks, this will have implications for the research areas of Applied Computing, Human Factors and Human Computer Interaction within interventional cardiology. Wearable technology creates the opportunity for cost-effective assessment that provides insight into trainee psychophysiology. This could predict task performance, including errors, uncertainty and more, and, combined with machine learning algorithms, could produce accurate computer-automated assessment in training.