Subtitlers on the Cloud: The Use of Professional Web-based Systems in Subtitling Practice and Training

The burgeoning and rapid evolution of cloud-based applications has triggered profound transformations in the audiovisual translation (AVT) mediascape. By drawing attention to the major changes that web-based ecosystems have introduced in localisation workflows, we outline ways in which these new technological advances can be embedded in the AVT classroom. Along these lines, the present study explores the potential benefits of cloud platforms in AVT training curricula and the ways in which this technology can be exploited in subtitling training. An analysis of current subtitling practices and tools, localisation workflows, and in-demand skills in the AVT industry is followed by an experience-based account of the use of cloud-based platforms in subtitler training environments to simulate and carry out a wide range of tasks. Our study pivots around the idea that cloud subtitling might prove useful to bridge the technological gap between academic institutions and the profession, as well as to enhance the distance-learning provision of practice-oriented training in subtitling.


Introduction
Since the turn of the 21st century, cloud-based platforms have been progressively introduced into the translation workflows of audiovisual media conglomerates and localisation companies with the aim of reducing costs, increasing productivity and optimising networked environments. As Cattrysse (1998: 10) highlighted over two decades ago: "new technologies involve new types of communication, and therefore new types of translation, adaptation or message processing. New jobs are created as well as new ways of working conditions. These changes imply that research as well as training have to acknowledge the consequences of these evolutions in terms of terminology and method".
A statement of this nature stands out as a clear and enduring reminder that technological advancements and, in the particular case of AVT, the digitisation of the image play a crucial role in each and every step of the translation practice. In this mercurial mediascape, the advent of cloud-based platforms has led to yet more groundbreaking changes in the AVT industry, in which a growing number of agents and stakeholders are progressively incorporating such tools in translators' workbenches (Díaz-Cintas and Massidda, 2019). Given the large volume of audiovisual productions that need to be translated into a myriad of languages, major media distributors are currently developing brand-new localisation workflows that reside online, in an attempt to optimise the productivity, security and quality of their output.
This adjustment of AVT workflows to online ecosystems is not completely new, as cloud-based translation platforms have been used by language service providers (LSPs) since the early noughties as a way of moving away from less flexible desktop-based solutions. Ultimately, their main objectives are to decentralise workflows, gain greater control of their audiovisual productions, closely monitor freelancers, liaise with clients, facilitate the work of project managers and boost access to translation toolkits among their freelance linguists. The growing number of localisation projects handled exclusively online is symptomatic of this ever-changing landscape (Baños-Piñero and Díaz-Cintas, 2015) and is exemplary of the many changes brought about by the technology turn in AVT (Chaume, 2013; Díaz-Cintas, 2013). While this turn emphasises the close relationship between AVT and technology from a general perspective, at the turn of the decade a new phase now seems to lead to a cloud turn (Bolaños-García-Escribano and Díaz-Cintas, 2020), which is meant to reshape the boundaries of the various AVT professional practices in an on-demand, internetised and hyper-audiovisualised mediascape.
Although subtitling, as discussed in this paper, has been at the forefront of this evolution, the dubbing industry has of late shown a growing interest in cloud-based solutions, with the development of dedicated platforms such as VoiceQ (voiceq.com) and ZOOdubs (zoodigital.com/services/localize/dubbing). Yet, due to space constraints, the present article will not delve into the dubbing sphere and only explores the specific field of cloud subtitling. After delineating the rapid evolution and gradual incorporation of cloud-based platforms in the subtitling industry, this study focuses on the potential they offer in training environments, with the ultimate aim of shedding light on their didactic applications. By offering a detailed overview of the state of the art of cloud subtitling, this research outlines and proposes new approaches to the use of web-based systems in the subtitling classroom.
A clear distinction ought to be made, however, between those web-based systems that allow users to manage subtitling workflows (e.g. assigning projects to assets, monitoring the localisation process, producing purchase orders and invoices) and the ones used to perform the actual translation (e.g. spotting, translation and reviewing). This article will primarily focus on the latter.

Managing Cloud-based Localisation Workflows
Cloud subtitling platforms are web-based environments that can be accessed by anyone with the right permission, anytime, anywhere and from multiple devices, as long as they are connected to the internet. These ecosystems offer subtitle editing toolkits that allow professionals to perform all typical tasks, such as the spotting or cueing of the subtitles, translating and reviewing, while taking advantage of a set of cutting-edge digital functionalities that include automatic shot-change detection, waveform representation of the audio track, and audio scrubbing, to play and hear each frame in connection with the audio track. Most developers have also begun to integrate, with varying degrees of success, other ground-breaking technology into their systems, such as computer-assisted translation (CAT) tools, automatic speech recognition (ASR) software for transcribing dialogue, artificial intelligence (AI) via machine-learning algorithms for auto-captioning, and neural machine translation engines for the automatic translation of subtitles, normally followed by human-centred post-editing.
The QC process, meant to verify the final linguistic and technical output of the localisation workflow, can be partially or completely automated, depending on the user's preferences. Tools can perform a fully automated check of the linguistic and technical parameters required for a specific project: maximum display rates and reading speeds, overlapping subtitles, crossing of shot changes, maximum number of characters per line, use of dashes in dialogue subtitles, and language-specific spelling. Within the system's file preferences and error-checking features, the user can select which errors the system should detect and correct according to the configuration settings. In semi-automated QCs, more room is left for professionals to intervene in the process and amend errors manually. Cloud subtitling editors might also offer conversion tools that can export and import subtitle files in multiple formats, as well as burning applications to embed and hardcode subtitles permanently onto the video.
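To make the fully automated option concrete, the following is a minimal Python sketch of how two of these checks might be scripted. It is an illustration only, not OOONA's or any vendor's actual implementation, and the limit and function names are invented:

```python
# Illustrative QC pass over a list of spotted subtitles, each represented
# as a (start, end, text) tuple with times expressed in seconds.
MAX_CHARS_PER_LINE = 42  # a common, project-dependent limit


def qc_check(subtitles):
    """Return (index, issue) pairs for lines that are too long or cues
    that overlap the following subtitle."""
    issues = []
    for i, (start, end, text) in enumerate(subtitles):
        if any(len(line) > MAX_CHARS_PER_LINE for line in text.split("\n")):
            issues.append((i, "line too long"))
        if i + 1 < len(subtitles) and end > subtitles[i + 1][0]:
            issues.append((i, "overlaps next subtitle"))
    return issues


subs = [
    (0.0, 2.0, "Hello there."),
    (1.5, 4.0, "A second subtitle that starts before the first one ends."),
]
print(qc_check(subs))  # → [(0, 'overlaps next subtitle'), (1, 'line too long')]
```

A real checker would add the remaining parameters listed above (reading speed, shot-change crossing, dialogue dashes, spelling), each as a further rule over the same cue list.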
Since the first decade of the new millennium, the aforementioned potential of cloud-based applications - as opposed to traditional shrink-wrap software - soon became apparent to the localisation industry. The secret of the appeal of these popular platforms, particularly in the case of LSPs, can be traced back to some of the basic operational functions that they embed. As AVT projects tend to be complex in nature (i.e. multilingual and encompassing a wide variety of file formats), any changes that may contribute to making current workflows more agile are always greatly welcomed. In some cases, a desktop application also communicates with the web-based version of the tool, thus providing enhanced flexibility for subtitlers who might prefer to work offline as well as online.
Rather obviously, one of the key benefits of the cloud is that the entire localisation process of the company's internal workflow is performed online and can be more easily monitored. When it comes to security, rather than dispatching the audiovisual productions to the professionals involved in a localisation project, these systems ensure that copyrighted content remains on the company's server, without the need for it to be downloaded. Thanks to the use of data encryption technology, the cloud is not only secure but also traceable, thus giving the company the luxury of easily verifying who has accessed any given material, including the date and time at which the access occurred.
This functionality also has the advantage of granting project managers a real-time overview of each stage of a localisation project, as they can quickly check on the work performed and stored in the cloud. At the other end of the process, clients who have been granted access to the cloud-based system can play a proactive role in the workflow by placing orders for new tasks, monitoring the progression of the ongoing projects and reviewing the output being produced in real time rather than at the end of the process only.
The cloud ecosystem allows for the output (e.g. subtitles) to be produced and converted into a vast array of different formats (e.g. .srt, .vtt, .xml), adding versatility to the localisation process as subtitle files can be subsequently processed by other external systems. The file conversion mechanism is meant to enhance accessibility and file sharing, as well as to speed up the localisation workflow in the post-production phase and ease the delivery of the final output. In the case of revoicing, and depending on the nature of the project, the added value of the cloud is that translators and voice talents can record their soundtracks from the comfort of their homes, without having to travel to a recording studio.
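As a minimal illustration of this kind of conversion (a sketch under simplifying assumptions, not any platform's actual converter), turning a SubRip (.srt) cue into WebVTT (.vtt) essentially involves prepending a header and swapping the decimal comma in the timecodes:

```python
def srt_to_vtt(srt_text):
    """Convert SubRip text to WebVTT: add the WEBVTT header and replace
    the decimal comma in timing lines with a dot."""
    lines = ["WEBVTT", ""]
    for line in srt_text.strip().split("\n"):
        if "-->" in line:
            line = line.replace(",", ".")  # 00:00:01,000 -> 00:00:01.000
        lines.append(line)
    return "\n".join(lines)


srt = """1
00:00:01,000 --> 00:00:03,500
Hello there."""
print(srt_to_vtt(srt))
```

Real converters must also handle styling, positioning and cue settings, which is precisely why dedicated conversion tools remain valuable in the workflow.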
Little to nothing has been done so far on the teaching of subtitling project management (PM) with the use of dedicated tools, as opposed to better-known translation PM platforms, such as XTM, Memsource and Transifex, which are more widely used in technologically led translation courses. Some of the reasons for this situation are the scarce number of subtitling PM tools available and the fact that they are usually proprietary software, exclusively owned and used by LSPs. As the rate of technological change is swift, finding the right piece of software to adopt in the classroom may well prove a daunting experience. Yet, AVT being a markedly industry-led discipline, it is to be expected that the teaching of new and emerging web-based PM tools will soon start to be incorporated into the curriculum.
In contrast, cloud subtitling tools are comparatively more readily available, as discussed in the following sections. Today, AVT courses are greater in number than they were only a couple of decades ago. In the particular case of subtitling, on account of its practical nature and the arguably limited demand for such professionals before the digitisation of the image, the teaching was traditionally done internally, within companies, or externally, by vocational institutions (Kruger, 2008). The growing demand for qualified subtitlers generated initially by the arrival of the DVD, and more recently by the vast amount of content produced and distributed by video-on-demand (VOD) streaming giants such as Amazon Prime, Disney+, HBO Now, Hulu, and Netflix, has had a knock-on effect on the academic offer available to students. In line with this unprecedented demand for subtitling services in the market, a large number of translator training centres and higher-education (HE) institutions have decided to offer subtitling courses both at undergraduate and postgraduate levels.
Subtitling, also known as text timing in the industry, is defined as a translation practice that consists of adding synchronised text on screen, usually at the bottom, which can be either an interlingual or intralingual translation of the original dialogue, narrations, songs and relevant text on screen (e.g. inserts). From its inception, and due to the very nature of this practice, subtitling has always been closely linked to technology and, since the mid-1970s, it has been performed with the use of specialist software that allows subtitlers to take full control of the spotting, i.e. the insertion of the in and out times at which the segments of text appear and disappear. The functionality of most professional subtitling programs on the market has been improved at an incredibly fast pace in recent decades, with some of the leading commercial manufacturers being EZTitles (eztitles.com), FAB (fab-online.com), Spot (spotsoftware.nl) and BroadStream Solutions (broadstream.com), the latter being the developers of the subtitling editor Wincaps Q4, whose interface is illustrated below.

The interface of desktop-based subtitling programs often integrates numerous key features that facilitate the creation of subtitles, such as (Fig. 1, A) a subtitle area, including a text box, in and out timecodes, an indication of the subtitle's duration in seconds and frames, and the subtitle display rate, which can be set in words per minute and/or characters per second; (B) a video area, with a player to render the clip and simulate the subtitles; (C) a media timeline, i.e. a subtitle bar with or without audio waveform representation, and with or without shot-change detection; and (D) a toolbox in which to select the layout, font style, positioning, safe area, punctuation and other parameters in order to adjust the settings of a given project.
The localisation of an audiovisual production encompasses a number of technical constraints, as well as socio-linguistic and cognitive challenges that need to be carefully handled by professionals. Therefore, the subtitling process requires the activation of multiple skills including: the ability to analyse the needs of the intended audience, to match the verbal to the visual; the ability to comply with deadlines, commitments, interpersonal cooperation, team organization; the ability to express oneself concisely and succinctly and to write with a sense of rhythm […]; the ability to adapt to and familiarize oneself with new tools; and the ability to self-evaluate in order to revise and assess the quality of the output (Gambier, 2013: 55).
Translation competence, understood as "the knowledge, skills and attitudes necessary to be able to translate" (Hurtado-Albir, 2017: xxv), enables an individual to carry out the cognitive operations required in a professional environment and integrates "various types of capabilities and skills (cognitive, affective, psychomotor or social) and declarative knowledge" (Hurtado-Albir, 2007: 167). It follows that translating audiovisual texts - which are fundamentally multimodal and multimedial in nature - requires additional competences, with a particular emphasis on technology. Among the most recent classifications of AVT-specific translation competences is the one drafted by Cerezo-Merchán (2018), who distinguishes between contrastive, extralinguistic, methodological and strategic, translation problem-solving, and instrumental competences. The latter refers to the mastery of specific software (e.g. subtitling systems) and highlights the importance that technological literacy has for would-be translators. This competence-based model, which draws on previous studies by Hurtado-Albir (2015) and the PACTE group, abides by the premise that AVT-specific courses should be designed paying attention to lesson contents, learning objectives and outcomes, and competences, as per the premises of constructive alignment in education (Biggs and Tang, 2011).
Instrumental competences are particularly relevant in subtitling training as they entail the mastery of specific software and the ability to work with a variety of multimedia files and web architectures. They should be conceived as part of subtitling training from an early stage, thus acknowledging the importance of using the right technology in the classroom. The ultimate goal of AVT training programmes should be the acquisition of specialised translation competences that can later be put to good use in a professional context, with the objective of avoiding the so-called "second-level digital divide", whereby students end up with "drastically differentiated skills" to the ones needed in the industry, which in turn influence the way they participate in society (OECD, 2010: online).
The mastery of a wide range of language and AVT technologies is becoming an increasingly essential skill for all professional translators, and courses should be designed accordingly. As a way to promote learners' agency, a student-centred approach should be prioritised: hands-on tasks accompanied by step-by-step instructions (e.g. easy guides, screen recordings and video tutorials) on how to use specialist technology are crucial in promoting learning independence and students' empowerment in human-machine interaction. A relevant role in subtitling training is played by the inter-dynamics between the learners and the software tools assisting their activity. Teaching cloud subtitling does not simply translate into embedding the latest technology into the learning process; it also means that AVT trainers need to be aware of how well the course aligns with the industry's expectations.
As cloud-based ecosystems gain ground in the industry and their role becomes more significant, it is imperative that today's students are exposed to them so that they can become familiar with these systems and appreciate their potential. As previously discussed, this may prove challenging, as no cloud PM tools for audiovisual commissions are available for teaching purposes and access to the various online creation tools can be secured only under certain circumstances. Currently, knowledge about these platforms can be acquired in various forms: some companies may be willing to allow tutors to use their proprietary platform for a one-off presentation to the students or, alternatively, the institution can purchase licences for the cloud-based tools so that students can practise with them and gain hands-on experience.
Ideally, web-based platforms should allow students and trainers to emulate real-life professional environments as closely as possible. By replicating the decentralised systems employed by LSPs, trainers can increase students' understanding of the mechanisms of the localisation industry and, as a consequence, enhance their future employability prospects.

Cloud Subtitling Case Study: OOONA's Online Toolkit
Cloud-based architectures designed for the production of subtitles tend to replicate the features offered by desktop programs, along with advanced functionality and the added bonus of incorporating more interactive technologies that allow for a comprehensive overview of the whole end-to-end subtitling process, from the reception of a commission to the delivery of the final product, as discussed in section 2. The cloud platforms available on the market are often company-specific, as previously suggested, so the following case study will centre on one of the few commercial cloud solutions available to practitioners.
The case study we discuss in these pages is based on OOONA's Online Toolkit, which was first academically explored in a one-day professional course on cloud translation technologies offered by University College London in 2015. Since then, a series of workshops have been held in various academic institutions in Europe and beyond, in an attempt to test its usability, gauge the users' experience and expand tutors' and students' knowledge of cloud subtitling.
The early, evolving version of OOONA's Online Toolkit revealed its shortcomings when compared to the functionality provided by the more advanced professional desktop-based software of the time. Yet, over the years and through close collaboration between the authors of this research and the software developers, the platform has seen a dramatic overhaul, with regular ad-hoc upgrades and newly integrated features that are second to none and squarely address the needs of professional subtitlers as well as translation students. Nowadays, alongside its many other proprietary translation solutions, such as the OOONA Translation Manager used to administer workflows, the company provides a state-of-the-art set of cloud subtitling tools that can be trialled and purchased by companies as well as individual users. As such, they represent a most suitable learning and teaching opportunity to come to grips with the new cloud-based environments.
Their online subtitling editor, which is the subject of the following discussion, integrates a set of applications that allow the completion of all the traditional subtitling-related tasks, i.e. the creation (section 4.1), translation (section 4.2) and revision (section 4.3) of subtitles. The editor has been developed to cater for the production of interlingual as well as intralingual subtitles, or closed captions, for d/Deaf and hard-of-hearing audiences. It also allows users to automatically transcribe the original audio, make use of embedded machine translation engines, import and export files in a myriad of formats, and burn and encode subtitles onto the video, features that were not commonly found in subtitling software until recently.
The tools are offered in the form of a basic application (e.g. OOONA Create) and a more sophisticated one (e.g. OOONA Create Pro). Upon logging into the platform, users can select their tool of choice to perform any of the specific tasks illustrated below.

The customisability of hotkeys in OOONA, which is a common feature among cloud-based subtitling tools, allows the user to choose how to operate the video player and the various subtitling commands. Normally, when subtitlers are introduced to a new piece of software, they have to learn a whole new set of shortcuts set by default within the system. The cloud has now led to greater agency for practitioners, who can decide on the hotkeys they prefer based on previous experience with other software (e.g. shortcuts from subtitling programmes they already master) or on ergonomic considerations (whether they use a laptop or a desktop computer with a smaller or bigger keyboard, or a specific area of the keyboard only, such as the numeric keypad).
In the classroom, this process of familiarisation with the operational intricacies of the cloud-based tool, and comparison with other pieces of software, helps to hone and refine the students' set of transferable skills.
The next four sections introduce some of the key activities that can be performed in the cloud-based subtitling platform and that can be exploited in the classroom.

Text-Timing Application (Create)
OOONA Create and Create Pro, the basic and advanced text-timing tools, are used to produce and cue subtitles from scratch. The latter initially performs an audio and visual analysis of the video via the OOONA Agent, an installable plugin that allows the system to communicate with the device or web repository where the video file is stored. This functionality lets the system detect shot changes, sounds and frames. The system allows users either to upload locally stored videos (on a personal device) or, alternatively, to work directly with the URL of a video hosted on online platforms such as YouTube, by simply copying and pasting the video's web address. From a pedagogical perspective, this is a more hassle-free and time-efficient approach to securing video material, as neither teachers nor students are required to download, encode or otherwise edit the video before it can be used.
Moreover, the file-uploading limitations of some learning platforms, including Blackboard Collaborate and Moodle, are automatically overridden when the materials are available through a single token or URL. The benefits of this holistic integration are evident: in the process of creating subtitles, the time and effort spent uploading video files are drastically reduced, and more time can consequently be devoted to the learning of spotting, translating and revising. In addition, the versatility of URLs is crucial for teachers during the preparation phase, while the virtually infinite selection of video materials offered by user-generated content on YouTube minimises potential issues related to copyright infringement, thanks to the machine-learning algorithms integrated in the system.
While the spotting of the subtitles is being performed, each keystroke is automatically saved, thus ensuring users will not lose their work while working online and making the whole process worry- and stress-free. Yet, the fact remains that in areas where the internet connection is too slow, intermittent or erratic, the process of spotting might become frustrating; a solution to this downside could be offered by the added option of using the platform in offline mode too.
During the whole creation process, any temporal or spatial discrepancies with the predefined parameters set for the project are marked in red and flagged, so that they can be addressed and amended on the go. In short, the Create and Create Pro tools can be used to produce templates but also to cue and translate, particularly in the case of small projects, hence constituting a very useful tool for subtitling training. The output can be saved as a project (.json), thus retaining more information than a simple subtitle text file, such as spatial and temporal parameters and settings, colours and also the link to the video, which is most fruitful when preparing exams and tests that all students have to take. Said file can be used either online, for future access by users, or downloaded onto their personal devices, so that the output can be shared with other colleagues, who can in turn open it on the same platform. Alternatively, the material can be downloaded as a subtitle file (e.g. .srt), which in essence contains the timecodes and the text, and can then be used in any other subtitling editor. The tool creates .ooona subtitle files and allows for conversion into a variety of different formats, including .dfxp, .fcpxml, .pac, .rtf, .stl, .txt, .xml and .vtt.
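The difference between the two outputs can be sketched with a hypothetical project structure. The field names below are invented for illustration and do not reflect OOONA's actual .json schema; the point is simply that a project file carries settings and the video link alongside the cues, whereas a plain .srt keeps only timecodes and text:

```python
import json

# Hypothetical project file: cues plus project-wide context.
project = {
    "video_url": "https://example.com/clip.mp4",  # placeholder link
    "settings": {"max_cps": 17, "max_chars_per_line": 42, "safe_area": True},
    "subtitles": [
        {"in": "00:00:01:00", "out": "00:00:03:12", "text": "Hello there."},
    ],
}

# Serialising it shows everything a plain subtitle file would discard.
print(json.dumps(project, indent=2))
```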

Template Translation Application (Translate)
Among its many features, this cloud-based tool offers the option of originating or working with pre-timed templates, or "master files" (Georgakopoulou, 2006), which are "working documents used in the professional world to maximise resources and cut costs" (Díaz-Cintas, 2008b: 97). The use of templates is today an enduring reality in the subtitling market (Nikolić, 2015; Georgakopoulou, 2019), which legitimises their inclusion in subtitler training too. In the Translate tool, the source and target languages (the latter being Italian in this specific example) coexist at the same time: the upper boxes represent the target text, while the lower boxes (in grey) contain the source subtitles.
As the target-text boxes on the right are filled in with the new subtitles (Fig. 5), the coloured bar below each subtitle box progressively turns from green to orange to red as a way of measuring the reading speed, in characters per second (cps) or words per minute (wpm), thus alerting users when the translation needs to be further condensed so that it can be comfortably read by the audience. In the example above, subtitles 11 and 12 show reading speed values of over 26 and 23 cps respectively, which exceed the maximum reading speed set in the file properties.
In this case, students can either re-adjust the timecodes, merge the subtitles if the template is not locked, re-segment the dialogue, or apply omission techniques to make the text shorter. This type of exercise prompts students to think critically and apply the text reduction techniques learnt in class, while also stimulating decision-making skills and awareness of how the source and target languages compare in terms of sentence and word length, meaning making, and oral vs. written style.
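The arithmetic behind these reading-speed alerts is simple enough to sketch. The 17 cps ceiling and the example sentences below are assumptions chosen for illustration, as each project sets its own limit:

```python
def cps(text, duration_seconds):
    """Characters per second: character count (ignoring line breaks)
    divided by the time the subtitle stays on screen."""
    return len(text.replace("\n", "")) / duration_seconds


original = "I really don't think that we should be doing this right now."
condensed = "We shouldn't do this now."

# The same 2.4-second slot: condensing the text lowers the reading speed.
print(round(cps(original, 2.4), 1))   # 25.0 -> far above a 17 cps ceiling
print(round(cps(condensed, 2.4), 1))  # 10.4 -> comfortably readable
```

The same formula shows why extending the in and out timecodes is the other lever: a longer duration divides the same character count into a lower cps value.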
Once the task is completed, users can run a series of semi-automated checks to correct any punctuation issues, highlight potential blank spaces and empty subtitles, call attention to any timing or reading speed violations, and warn against potential typos and spelling mistakes. As discussed in the case of Create Pro, users can either save the project online, or export and download it as a .json file to store on their devices. Alternatively, the subtitle file can be exported in the preferred format so that it can be opened in other editing tools.
The use of predefined templates allows for the implementation of task-based approaches with the aim of exposing students to a set of scaffolded activities, either guided or open ended. The use of tasks can help to guide the student from beginning to end in the form of step-by-step activities and thus constantly monitor the alignment of learning outcomes and teaching objectives. Tasks can be tailor-made and structured depending on the students' command of the software and the lesson's objectives.
Guided activities may present a set of variants, such as error-solving tasks based on either linguistic or technical challenges. Templates can also contain ad-hoc errors, e.g. incorrect timecodes, or subtitles to be split or merged. This kind of task can prove most valuable, be it to prompt students to modify subtitles linguistically (e.g. condensation or reformulation) or technically (e.g. amending timecodes). As for open-ended activities, students should be more independent and able to make decisions so as to achieve a set of predefined goals, which often consist of producing a high-quality final template that showcases their technical expertise.

Template Revision Application (Review)
The revision process can be performed using the Review tool. This application displays the translated version alongside the proofread version: all the changes can be made visible through colours, red or green, highlighting the differences between the two versions before finalising the output. The Review tool tracks all the amendments, as shown in Figure 6, and, in a pop-out window (Fig. 6, A), it displays a summary of all the additions, edits and deletions made by the reviser for the benefit of the translator.

The system allows users to edit the source file, which can prove very useful to report any technical or linguistic infelicities that the translation may contain. In Figure 7 below, the original subtitles 10 and 11, in the left-hand column, have been modified by the reviser, in the right-hand column, to better adhere to standard practice and the parameters set for the project. Both subtitle numbers are followed by the "?" symbol, inside a white box, to indicate that they have been subject to changes. Revised subtitle 10 maintains the same timecodes and duration (i.e. 03:17) as the original, but the line break has been adjusted to coincide with a natural syntactic division in the sentence. Subtitle 11, on the other hand, has seen more changes. As the original text reflected a reading speed of 20 cps, considerably higher than the maximum of 15 cps set for this project (hence the red alert), the in and out timecodes have been re-adjusted to allow the subtitle to remain slightly longer on screen (03:19 instead of 03:11). The line break has also been reconsidered, the previous subordinate clause has become an independent sentence thanks to the deletion of "because", and the adverbial "much" has been deleted to make sure that the ensuing, condensed text stays as close as possible to the maximum display rate of 15 cps.
For ease of reference, all the changes incorporated into the subtitles are itemised in the so-called Editing Diff Summary (Figure 8), which provides a chromatic overview of all the edits in the form of deletions (red highlight) and additions (green highlight). The user can then download a copy as a .doc file with a snapshot of both versions. Students can greatly benefit from having both files simultaneously displayed to compare which of the two versions is more appropriate. On the one hand, this tool allows students to double-check their own work and spot any linguistic or technical errors that might have previously gone unnoticed. On the other hand, it is extremely valuable for teachers to assess their students' work, as the track-changes functionality allows tutors to share the corrections and include written feedback in the form of comments on each subtitle.
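A rough analogue of such a diff summary can be produced with Python's standard difflib module. The example below is illustrative only (invented sentences, not the platform's own code), echoing the deletions of "because" and "much" discussed earlier:

```python
import difflib

# Word-level comparison of a translated subtitle and its revised version.
translated = "because the house was much bigger than ours"
revised = "The house was bigger than ours."

diff = list(difflib.ndiff(translated.split(), revised.split()))
deletions = [token[2:] for token in diff if token.startswith("- ")]
additions = [token[2:] for token in diff if token.startswith("+ ")]
print("deleted:", deletions)
print("added:", additions)
```

Colour-coding each deletion red and each addition green, as the Editing Diff Summary does, is then a matter of presentation rather than computation.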
Also pedagogically relevant is the fact that it enables teachers to set up ad-hoc peer-reviewing activities. Students can be trained to observe, analyse, assess, correct and comment on their peers' work, i.e. a translation produced by a classmate. It can equally be used in the context of a subtitling project simulation in which the work is split among different project members, and students have to revise each other's work in order to achieve a finalised product (i.e. a fully subtitled video). This type of activity hones analytical skills and ensures that teaching remains collaborative, motivating and inspiring (Georgiou et al., 2018). In addition, the interconnectedness brought about by peer-reviewing activities, particularly when developed within a cloud-based learning environment, may help to promote teamwork among would-be freelance translators, whose professional environment is known to be quite solitary, and enhance remote cooperation.

Burn & Encode application
Finally, the Burn & Encode tool completes the subtitling workflow by embedding the subtitles produced in the previous phases into the video file. As displayed in Figure 7, the interface is very clean: users only need to upload the subtitles and the video file and press the burn command that appears below the images. Burning the subtitles is a rather straightforward step, but the processing time depends on the duration and technical properties of the video employed. As it happens, this application requires users to upload videos manually, as they need to be analysed and processed by the OOONA Agent plugin. Despite its limitations, this tool, still missing in most free and paid subtitling systems, is pedagogically compelling as it allows students to produce a tangible output of their labour. Once the subtitle file is considered final, students can produce a video with embedded subtitles that they can then view using a video player of their choice and share with whomever they want. This step contributes to creating an overall feeling of fulfilment, a true indicator of motivation and engagement, which are key driving factors in higher education (Biggs and Tang, 2011).
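Outside a cloud platform, the same burning operation, i.e. hard-coding subtitles into the picture, is commonly performed with ffmpeg, whose subtitles video filter renders a subtitle file onto the frames. The sketch below merely builds the command line; the file names are invented, and actually running the command requires an ffmpeg build compiled with libass support.

```python
import shlex


def burn_command(video: str, subs: str, out: str) -> list[str]:
    """Build an ffmpeg command that burns a subtitle file into the
    video stream while copying the audio untouched."""
    return [
        "ffmpeg",
        "-i", video,                  # input video
        "-vf", f"subtitles={subs}",   # render subtitles onto the frames
        "-c:a", "copy",               # leave the audio stream as-is
        out,
    ]


cmd = burn_command("clip.mp4", "clip.srt", "clip_subtitled.mp4")
# Print a shell-ready version of the command for inspection.
print(shlex.join(cmd))
```

Comparing such a command-line route with the one-click Burn & Encode interface can itself be a useful classroom exercise, as it exposes students to what the cloud tool abstracts away.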

Conclusions
Subtitling is a key and thriving professional activity in today's AVT industry, experiencing some exciting developments while also facing some challenges in the age of cloud technologies. The localisation and distribution of audiovisual programmes for the global market are both accelerating and diversifying in terms of content and number of languages (Chaume, 2018). In addition to the higher volume of commissions, the industry is also confronted with numerous other issues, such as security, copyright infringement and the frenetic deadlines brought about by the immediacy and urgency with which subtitles are often required. Against this backdrop, swift technological advances have concentrated on the deployment and establishment of cloud computing as a way to respond to some of these challenges.
Although some educational centres are slowly catching up with these new technological trends, the reality is that the gap between academia and the industry on this front is far from bridged. Given the direction of travel in the AVT industry, programmes of study specialising in the teaching and learning of the various AVT services would be wise to embrace the use of cloud-based tools and expose their students to real market practice. The methods used for the training of future subtitlers are undergoing substantial transformation and, in these pages, we have suggested that subtitling trainers ought to adapt and update their teaching techniques and materials in order to incorporate the latest technology in the classroom. By adhering more closely to the industry's conventions through the scaffolding of subtitling tasks and the migration of the learning environment onto a cloud platform, trainers will contribute to the development of translation competences as important as the instrumental ones.
As it stands, the opportunities to employ these systems in the classroom are rather limited, as most cloud-based ecosystems are proprietary and for the exclusive use of the company's staff. Others, like the toolkit described in these pages, can be licensed by anyone on a rental basis. The advantages of such tools being hosted on the web have been expounded upon in the previous sections of this article. Yet, the fact remains that, so far, these solutions tend to be sets of independent modules that allow the practice of subtitling-related tasks in a rather compartmentalised manner. In this respect, the didactic potential of these web-based tools could be greatly enhanced by the development of ad hoc training platforms that take a more holistic approach and can ultimately become self-sufficient training spaces. Such an online ecosystem would not only host the various apps needed for the performance of the different subtitling tasks but would also ideally incorporate a PM interface that can help students gain greater insight into the overall workflow of AVT localisation. An educational cloud-based subtitling platform would also have the potential to improve trainees' teamwork and communication skills within a collaborative environment, into which social networking applications can be integrated to facilitate exchanges among peers.
At a time when remote education is gaining ground, on the back of the disruption caused by the spread of the COVID-19 pandemic, cloud-based platforms come across as a much-needed solution. A platform of this nature could not only overhaul the dynamics of the teaching methods and expand the remit of the content to be covered to encompass PM, but it would also easily broaden the membership of the class to include students from all corners of the world.
To guarantee that any new developments on this front meet the needs of all those involved in the pedagogical process, empirical research on the teaching and learning of subtitling, and revoicing, from a user-centred perspective is needed to better understand how best to exploit these applications in the classroom. Usability tests of beta versions and prototypes with students and trainers would help in the design and development of an interface that is intuitive and user-friendly.
Such direct input on how real users cope with the system could also shed light on the manner in which subtitlers-to-be learn by doing and on the way in which the newest technical developments can be best integrated into the training.
As the cloud turn has opened the gates to a new range of technologies that are here to stay, AVT trainers need to be ever-more imaginative and proactive to incorporate them into their teaching to make sure that training in AVT continues to be up-to-date and relevant to the needs of the industry.