Introduction to the Dossier Issue "Studying human-computer interaction in translation and interpreting: software and applications"

Digital tools are changing not only the process of translating and interpreting, but also the industry as a whole, societal perceptions of translation and interpreting, and research in the field. This Tradumàtica Special Issue collects research on some of these topics, highlighting the importance of furthering research on human-computer interaction in translation and interpreting studies.


Advances in natural language processing and related technologies have increasingly important implications for translation, interpreting and cross-cultural communication.
Digital tools can change not only translation and interpreting processes, but also how tasks are managed and commissioned in the language industry, how society interacts with and perceives language technologies, and how translation and interpreting research is conducted. This dossier issue of Tradumàtica provides a snapshot of research on some of these topics, examining a range of pressing themes in human-computer interaction for translation and interpreting. Implementing technology and human-computer interaction in workflows is not just a technical but a socio-technical process (see Cadwell et al. 2018), which requires further research to provide a more detailed and nuanced picture of diverse translation and interpreting perspectives.
In relation to post-editing of machine translation specifically, we highlight two key aspects of how research is evolving. First, although this is a growing sub-field of research with an already strong body of work, the post-editing literature still has important gaps.
Differences in post-editor behaviour, for example, have been observed in many studies (see also Conde Ruano in this issue). Resources such as machine translation or automatic speech recognition can be applied to this type of content not only to save time and costs for businesses and corporations, but also to expand access to information more equitably (Nurminen & Koponen 2020). The increasing variety and sophistication of the devices on which different kinds of content can be accessed offer growing opportunities for accessibility, while also opening up new avenues for training and research on the implications of multimodality in terms of reception, human-computer interaction, quality assurance and ethics, among others.
As far as interpreting studies are concerned, the increasing integration of technology into the interpreting workflow has sparked interest among researchers and trainers alike.
Interpreting is increasingly likely to involve remote communication technologies, automatic speech recognition and computer-assisted interpreting (CAI) tools (Fantinuoli 2019; Defrancq & Fantinuoli 2020). Furthermore, technology can also support analyses of translation. Vercauteren, Reviers and Steyaert examine the machine translation of audio descriptions of three Dutch films. By manually evaluating these machine translations, they aimed to identify typical errors occurring in the language combination studied and, more specifically, to what extent these errors may depend on the specific features of audio description as a text type. Indeed, they find audio description to be a challenging text type for machine translation: it is characterised by norms that can vary from one country to another, specific linguistic constructions and a high degree of multimodality, and interpretation of its content relies heavily on context. Vercauteren, Reviers and Steyaert also identify potential avenues for further research in machine-translated audio description, an area which, as the authors point out, is expected to grow as legislation in an increasing number of countries requires more and more audiovisual content to be made accessible.
Conde Ruano reports on the evaluation of a multilingual and accessible audio guide of the Facultad de Letras of the University of the Basque Country, aimed at supporting people with visual impairments in getting to know and moving around the building.
Besides describing the guide's creation through a service-learning project involving teachers and students, the paper focuses mainly on evaluating the audio guide both in terms of the elaboration process and of the product. It suggests a protocol for assessing this product type with regard to functionality and compliance with recommendations and standards in the field, based on a testing and evaluation session that placed special focus on the target users' interaction with the online platform hosting the guide and on the learning process involved in the whole project, from product creation to testing and evaluation.
Koržinek and Chmiel address a typical challenge in the annotation of interpreting corpora and spoken corpora in general: identifying the different speakers in a dataset. The Polish Interpreting Corpus (PINC) is a corpus of original Polish and English European Parliament speeches and their respective interpretations, comprising over 190,000 tokens.
Koržinek and Chmiel present a method, employed on PINC, for automatically identifying voices using a deep neural network model, and offer a complete step-by-step tutorial for implementing the protocol. They thus give scholars working on interpreting, and on spoken corpora in general, new insight into effective automatic speaker identification. This implementation of technology speeds up an annotation process known to be complex and time-consuming, and paves the way for faster annotation of increasingly large corpora.
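For readers less familiar with this technique, the general idea behind embedding-based speaker identification can be illustrated in a few lines. The following Python sketch is purely hypothetical and is not the authors' pipeline (their tutorial should be consulted for that): it assumes fixed-length voice embeddings have already been extracted by some neural model, and matches an unknown segment to the closest enrolled speaker by cosine similarity.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_speaker(segment_embedding, enrolled, threshold=0.7):
    """Match a segment embedding against enrolled speaker embeddings.

    enrolled: dict mapping a speaker label to a reference embedding.
    Returns the best-matching label, or None if no match reaches
    the similarity threshold (i.e. an unknown speaker).
    """
    best_name, best_score = None, -1.0
    for name, ref in enrolled.items():
        score = cosine_similarity(segment_embedding, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Toy example with 3-dimensional "embeddings"; real speaker
# embeddings are typically 128-512 dimensions produced by a
# deep neural network.
enrolled = {
    "interpreter_A": np.array([1.0, 0.1, 0.0]),
    "interpreter_B": np.array([0.0, 1.0, 0.2]),
}
segment = np.array([0.9, 0.2, 0.0])
print(identify_speaker(segment, enrolled))  # closest to interpreter_A
```

In practice, systems of this kind differ mainly in how the embeddings are learned and in how matching or clustering is performed over long recordings; the comparison step itself usually reduces to a similarity computation of this sort.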
The contributions to this Tradumàtica dossier issue highlight the growing diversity of technology-mediated practices in the language industry and the importance of further study of human-computer interaction for translation and interpreting. We hope this issue will stimulate discussion and new directions for research in this area.