Closed Captions, Subtitles, and AI Articles

Overview

AI-generated captions and subtitles can be enabled for live-streamed sessions, as well as for simulated live videos and Video-on-Demand (VoD) hosted on the Touchcast platform. For live-streamed sessions (meaning any content streamed live to a pipeline, regardless of whether it is truly live or a pre-recorded video), the languages required, including English, should be specified during pipeline setup.

Up to 6 additional languages can be added for live-streamed sessions (a total of 7 including English).

VoDs can support more languages than live streams (up to 10, including English; see Available Languages for Subtitles & Captions below).

 

AI-generated articles are available for VoD consumption only. AI articles are automatically generated by Touchcast for all videos loaded into the platform and hosted on Touchcast, but they are not displayed on the event page unless enabled.

Dubbing is also available only for VoD, and involves a third-party vendor.

 

Terminology

  1. Closed Captions (CC): Transcription of the dialogue in the original spoken language, provided for accessibility. Generated live by AI and edited by the Language Services team as part of the post-event article editing process for greater accuracy. Live human-generated captions can also be provided via a third-party vendor.

  2. Subtitles: Translation of the Closed Captions. Can be generated live if set up in advance, or generated from the original language captions post-event (after article editing, for greater accuracy).

  3. Dubbing: A translated audio version of the original-language dialogue. Can be translated live by human interpreters at a third-party vendor, or recorded post-event, again by a third-party translation service, with the dubbed-language audio files added to the video file.

  4. AI Article: An interactive article that displays the AI-generated transcription of the dialogue alongside the video file in real time as the video plays. The AI generates highlights that also form part of the video’s AI-generated trailer. AI Articles should be edited to correct terminology and name spellings, and to remove filler words (“ums” and “likes”) and incorrect grammar.

  5. Chapters: A feature of AI Articles, chapters are subheadings within the transcript. They are generated automatically but can be fully edited, and new chapters can be added manually.

 

How To

Live AI-Generated Captions & Subtitles

Enabling live AI-generated captions & subtitles

Live AI-generated captions and subtitles are enabled at the pipeline level. A pipeline can support up to seven languages for a live stream, and these are typically enabled when the pipeline is set up.

The most common languages used are English, French, Spanish, Portuguese, German, Chinese (Simplified), Japanese, and Hindi.

Live captions and subtitles are generated per pipeline, so an event with multiple pipelines can have different captions and subtitles enabled for each pipeline.

Because languages cannot be enabled while a pipeline is actively running, it is highly recommended to confirm with the production team that these languages are enabled prior to a live event. If needed, you can stop a pipeline to enable captions.

Important: English must be the source language for the feature to work.
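
As an illustration of the constraints above (English required as the source, and at most seven languages per live pipeline), the following minimal Python sketch checks a requested language list before pipeline setup. The function and its name are hypothetical and not part of the Touchcast platform.

    # Hypothetical pre-flight check for a live pipeline's language list.
    # Grounded only in the rules above: English must be included as the
    # source language, and a live pipeline supports at most 7 languages.

    MAX_LIVE_LANGUAGES = 7  # per pipeline, inclusive of English

    def validate_live_languages(languages):
        codes = sorted({code.lower() for code in languages})
        if not any(code == "en" or code.startswith("en-") for code in codes):
            raise ValueError("English must be included as the source language.")
        if len(codes) > MAX_LIVE_LANGUAGES:
            raise ValueError(
                f"A live pipeline supports at most {MAX_LIVE_LANGUAGES} languages "
                "(English plus up to 6 additional)."
            )
        return codes

    # Example using some of the most common languages listed above:
    validate_live_languages(["en", "fr", "es", "pt", "de", "zh-hans", "ja"])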

AI Articles

Generating AI Articles

AI-generated articles are only compatible with sessions conducted in English. The AI article in its original form directly informs the AI-generated English captions, so it is recommended that you edit the AI article before generating any captions or subtitles.

To edit an AI article, you will first need to upload or assign the VoD for that particular session. To do this, open the Content Management panel and select Tracks Agenda. From here, choose the session to which you’d like to add a VoD by clicking the pencil icon (“Edit Agenda”) next to that agenda item under the Actions column.

Scroll down the session entry until you reach the field labeled VoD. You can now either assign a session that has already been cut from the pipeline or upload a VoD session. Once the video has been assigned and loaded, press Submit.

 

You can access the AI article editor by either re-opening the agenda item and clicking on the Page icon to the right of the video, or by clicking the page icon from the Actions panel (“View VoD”) on the Tracks Agenda page.

 

 

This will then open a new window where you can view the video in the Touchcast Cloud interface. In the upper right of your screen, you will see three dots that read “More” upon hover. Clicking these dots will expand a dropdown menu; select Article Editor from the list of options.

 

Editing AI Articles

A new tab will open in your browser for the Article Editor. If you have only recently uploaded a video or cut a session from the pipeline, the article may not have finished processing; in that case you’ll see a greyed-out article with a notice that the article is still generating. When an article is fully generated, you’ll see a title, a table of contents, and the AI-produced transcription of the video.

 

To edit the article, click anywhere within the AI-generated text and begin typing. Since the AI is subject to inaccuracies, you’ll notice some words highlighted in pink. This indicates where the machine was not confident in its transcription and may require closer review.

When editing words or passages, amend words one by one as needed. This is a necessary step because the article informs the AI-generated English captions and all subsequently generated AI subtitles. Editing large blocks of text risks altering the time stamps of the captions, which means the captions won’t align with what is being spoken when they are generated. If you think you’ve affected the time stamps, you can always regenerate the article from the beginning by clicking the “Regenerate” button in the top navigation.

 

Editing AI Articles - Highlighting 

You’ll also see certain phrases highlighted in yellow. These are sections the AI has identified as important and will be used for the trailer. 

You can highlight or un-highlight sections by selecting the text and clicking the Highlight Trailer button, indicated by the highlighter icon at the top of the screen.

 

Editing AI Articles - Chapters 

Another feature of the AI article is the table of contents and chapters. The AI will produce a table of contents and chapter titles that can be edited manually. To edit the table of contents you will first need to edit the chapter headings. 

You can edit the text of a chapter heading by clicking into it and editing or deleting the text. If a chapter heading is in the wrong place, you can remove the chapter entirely by deleting its text.

To add a Chapter, hit the return key on your keyboard twice. This will generate placeholder text that reads “This is a sample header.” Adding, deleting, and editing chapter headers will update automatically in the table of contents. 

Chapters can also appear on the player bar of the VoD. This is a setting you can enable globally, for all VoD, or individually (see the next section, Enabling AI Articles on an event page, for more information).

 

It is very important to SAVE your AI article before exiting the tab, and it is highly recommended that you save periodically to prevent the page from timing out. Click the Save button in the upper right of your screen and wait until you see a message confirming the article was saved successfully.

 

 

Enabling AI Articles on an event page

Once human review of an AI article is complete, you can choose to enable the article on the event page. There are two ways to do this:

  1. Enable the article for all VoD sessions. This is achieved by going to the Settings panel, opening General Settings, and scrolling to the VoD section. Here you’ll check the box that says “Enable article in player by default.”

 

 

  2. Enable articles for VoD sessions individually. You’ll do this by opening the Content Management panel, opening Tracks Agenda, and clicking the Edit Agenda button of the session for which you wish to enable the AI article. Under settings you’ll toggle the button for “Enable Article View for VoD” to ON.

    1. If you want chapters to appear in the player of the VoD, you can also enable that feature by toggling the button for “Enable chapters.”

 

Enabling VoD AI-generated captions and subtitles

AI captions and subtitles for VoD, or any video hosted on the Touchcast cloud, can be enabled from the CMS.

To enable AI-generated captions you must first upload or assign the VoD for that particular session and review the AI-generated article. 

Similar to accessing the article editor, you can access the captioning and subtitle panel by either re-opening the agenda item and clicking on the Page icon to the right of the video, or by clicking the page icon from the Actions panel (“View VoD”) on the Tracks Agenda page.

 

 

This will open a new window where you can view the video on the Touchcast Cloud. Again, click the three dots in the upper right; from the dropdown menu select the “Subtitles/CC & Audio” option. 

This will open the Subtitles/CC & audio tracks panel. To add subtitles and captions, click + Add subtitle/CC track. You will then be prompted to either “Generate” a subtitle/CC with AI or “Upload” a .vtt or .srt file.

 

Regardless of which prompt you choose, you will then be able to select the language of that subtitle/CC.

  • To generate an AI subtitle/CC, select your language and click “Generate.”

  • To upload a .srt or .vtt file, select your language and then click “Upload.”
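
If you are preparing a caption file for upload rather than generating one with AI, the minimal Python sketch below writes a valid WebVTT (.vtt) file; it is an illustration of the file format only, not a Touchcast tool. An .srt file follows the same cue structure but numbers each cue and uses commas before the milliseconds.

    # Minimal sketch: write a WebVTT (.vtt) caption file suitable for upload.
    # Cue times use the HH:MM:SS.mmm format; the cue text follows each timing line.

    cues = [
        ("00:00:00.000", "00:00:04.000", "Welcome, everyone, to today's session."),
        ("00:00:04.000", "00:00:09.500", "Let's start with a quick look at the agenda."),
    ]

    with open("captions_en.vtt", "w", encoding="utf-8") as f:
        f.write("WEBVTT\n\n")  # required header line
        for start, end, text in cues:
            f.write(f"{start} --> {end}\n{text}\n\n")

    # The equivalent .srt file would number each cue and write the timings as
    # "00:00:00,000 --> 00:00:04,000".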

To delete an already generated or uploaded subtitle/CC track, click the three dots next to that entry and select “Delete.”

 

Available Languages for Subtitles & Captions

The following languages can be used for AI-generated subtitles and captions for videos streamed or hosted on Touchcast Fabric and the Showtime platform.

For VoD: Touchcast can support up to 10 subtitle languages (inclusive of English)

For LIVE: Touchcast can support up to 7 languages per track (inclusive of English)

  • Afrikaans (af)

  • Albanian (sq)

  • Amharic (am)

  • Arabic (ar)

  • Armenian (hy)

  • Assamese (as)

  • Basque (eu)

  • Belarusian (be)

  • Bengali (bn)

  • Bulgarian (bg)

  • Burmese (my)

  • Catalan (ca)

  • Chinese (China) (zh-cn)

  • Chinese (Hong Kong SAR China) (zh-hk)

  • Chinese (Simplified Han) (zh-hans)

  • Chinese (Singapore) (zh-sg)

  • Chinese (Taiwan) (zh-tw)

  • Chinese (Traditional Han) (zh-hant)

  • Croatian (hr)

  • Czech (cs)

  • Danish (da)

  • Dutch (nl)

  • Dutch (Belgium) (nl-be)

  • English (en)

  • English (Canada) (en-ca)

  • English (Ireland) (en-ie)

  • English (UK) (en-gb)

  • English (US) (en-us)

  • Estonian (et)

  • Faroese (fo)

  • Finnish (fi)

  • French (fr)

  • French (Belgium) (fr-be)

  • French (Canada) (fr-ca)

  • French (Switzerland) (fr-ch)

  • Galician (gl)

  • Georgian (ka)

  • German (de)

  • German (Austria) (de-at)

  • German (Switzerland) (de-ch)

  • Greek (el)

  • Guarani (gn)

  • Gujarati (gu)

  • Hebrew (he)

  • Hindi (hi)

  • Hungarian (hu)

  • Icelandic (is)

  • Indonesian (id)

  • Italian (it)

  • Japanese (ja)

  • Kannada (kn)

  • Kashmiri (ks)

  • Kazakh (kk)

  • Khmer (km)

  • Korean (ko)

  • Lao (lo)

  • Latin (la)

  • Latvian (lv)

  • Lithuanian (lt)

  • Malay (ms)

  • Malayalam (ml)

  • Maltese (mt)

  • Maori (mi)

  • Marathi (mr)

  • Mongolian (mn)

  • Nepali (ne)

  • Norwegian (nn)

  • Oriya (or)

  • Persian (fa)

  • Persian (Afghanistan) (fa-af)

  • Persian (Iran) (fa-ir)

  • Polish (pl)

  • Portuguese (pt)

  • Portuguese (Brazil) (pt-br)

  • Portuguese (Portugal) (pt-pt)

  • Punjabi (pa)

  • Rhaeto-Romance (rm)

  • Romanian (ro)

  • Russian (ru)

  • Serbian (sr)

  • Sindhi (sd)

  • Slovak (sk)

  • Slovenian (sl)

  • Somali (so)

  • Spanish (es)

  • Spanish (Latin-American) (es-419)

  • Spanish (Spain) (es-es)

  • Swahili (sw)

  • Swedish (sv)

VoD Dubbing

  • TBD

Time Estimates

  • Uploading a video takes roughly 1-2x the duration of the video.

  • Processing a video takes roughly the duration of the video. 

  • Generating the article (which at this stage is generally just a raw transcript) takes roughly the duration of the video.

  • Refining/editing the English article takes roughly 2-4x the duration of the video, depending on the number of AI errors, inaudible words, and how heavily accented the spoken English is.
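
As a rough planning aid only, the multipliers above can be combined into an end-to-end estimate; the Python function below is a hypothetical illustration, not part of the platform.

    # Rough end-to-end turnaround estimate based on the multipliers above.
    # Assumes no re-uploads, regeneration, or client review rounds.

    def estimate_turnaround_hours(video_hours):
        upload = (1.0 * video_hours, 2.0 * video_hours)   # 1-2x duration
        processing = 1.0 * video_hours                     # ~1x duration
        article_generation = 1.0 * video_hours             # ~1x duration
        editing = (2.0 * video_hours, 4.0 * video_hours)   # 2-4x duration
        low = upload[0] + processing + article_generation + editing[0]
        high = upload[1] + processing + article_generation + editing[1]
        return low, high

    low, high = estimate_turnaround_hours(1.0)  # a one-hour session
    print(f"Estimated turnaround: {low:.0f}-{high:.0f} hours")  # roughly 5-8 hours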

FAQ


How many people can edit an article at a time?

Only one person can actively edit an article at a time. You can, however, have multiple people working on different articles within the same event or organization, so long as they do not belong to the same session.

How accurate are the captions/subtitles?

 

Because English captioning is produced via Amazon Web Services (AWS), it will have approximately 70% accuracy. This assumes the speaker is enunciating clearly and has a good connection.

 

All other subtitles are based on the English captioning, so errors in the English will likely result in errors in translations. 

Are you able to prime the AI with certain phrases or technical terms? 

Acronyms, technical terms and some personal and company names are quite often misunderstood in the live-generated captions. Clients should be made well aware of this.

Can clients get involved in editing articles? 

This is possible! However, if you are involving clients, this should be accounted for in the project scope, as it requires coordination and effort to onboard clients on how editing works.

Clients must be made aware of the protocols for editing an article (no copying/pasting, the way edits may impact caption timestamps, etc.).

Can clients submit human-edited captions and subtitles?

Yes! However, these can only be used for VoD, as the pipeline will still generate captions and subtitles in real time.

The exception is if the client provides pre-recorded videos with the captions already baked in, in which case they should be aware of the safety zone so the captioning does not interfere with the player bar (see the Brand Guidelines deck for an illustration). 

How long does it take to generate an AI article? 

~ duration of the video