In modern newsrooms, speed and accuracy are everything. The difference between breaking a news story and being an “also-ran” can come down to the efficiency of your newsroom workflow. The ability to capture, store and find information is central: the quality of your journalists’ output relies heavily on their ability to locate and retrieve relevant background information for the story they are writing.
In this environment, the sophistication of your archive and its ability to offer up its treasures effectively can be the difference you are striving for. So, an airplane crashes and your ENG crew are first on the scene. The reporter needs relevant information fast: what is the safety record of this particular aircraft? When was the last time an aircraft crashed leaving this airport? What was the weather like at the time of the crash? This is the kind of information that turns a sketchy scene-of-the-crash report into a well-researched and insightful news story.
But within the constantly changing landscape of the newsroom, it is not so easy to maintain your archive. Do you find indexing material difficult and time-consuming in your newsroom? Do you struggle to find content that you know exists but lack the time or resources to locate within the timescale of news? If you fall into either or both of these traps, you are in good company: in our experience at Dalet, these are very common problems.
When is the right time to archive your news content?
Traditionally, archiving was done after broadcast: stories and related videos were indexed and archived after they had been aired. But new techniques are evolving that bring the archive into the heart of the news production workflow. This means starting the archiving process during media ingest!
Many of our customers have developed a small team of “media coordinators”. This team monitors everything that enters the workflow and adds time-coded tags to any relevant content at the earliest possible stage. This automatically creates a clip or a sequence of clips that can then be quickly identified and directed to a journalist or a group of journalists as they work on a news story.
Straight away, this valuable metadata will travel alongside the content throughout the entire life cycle of the story, all the way to the archive. This is a key concept, which we call metadata inheritance. Put simply, it means that you don’t have to start over and re-enter metadata from scratch at every stage.
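To make metadata inheritance concrete, here is a minimal Python sketch of the idea. The class and function names are purely illustrative, not Dalet’s actual API: a clip derived from an ingested item simply carries the parent’s metadata, plus any time-coded tags that fall inside its range, so nothing has to be re-keyed.

```python
from dataclasses import dataclass, field

@dataclass
class TimecodedTag:
    """A descriptive tag anchored to a timecode range in the media."""
    label: str
    tc_in: str    # zero-padded "HH:MM:SS:FF", so string comparison follows time order
    tc_out: str

@dataclass
class MediaItem:
    """An ingested item, or a clip derived from one."""
    item_id: str
    metadata: dict = field(default_factory=dict)
    tags: list = field(default_factory=list)

def derive_clip(parent: MediaItem, clip_id: str, tc_in: str, tc_out: str) -> MediaItem:
    """Cut a clip from a parent item. The clip inherits the parent's metadata
    (metadata inheritance) plus every tag that falls inside its range."""
    clip = MediaItem(clip_id, metadata=dict(parent.metadata))
    clip.tags = [t for t in parent.tags if tc_in <= t.tc_in and t.tc_out <= tc_out]
    return clip

# A media coordinator tags raw footage at ingest...
raw = MediaItem("ingest-0421", metadata={"event": "airport crash", "camera": "ENG-2"})
raw.tags.append(TimecodedTag("wreckage wide shot", "10:04:12:05", "10:04:40:00"))

# ...and any clip cut from it carries that information automatically.
clip = derive_clip(raw, "clip-0421-01", "10:04:00:00", "10:05:00:00")
print(clip.metadata["event"], [t.label for t in clip.tags])
```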
But there is much more to it. A story-centric workflow enables users to aggregate a diversity of information sources into the same news story quickly and easily (a short sketch of such a story record follows this list):
- Related wires and video material
- Different versions of the story as it evolved over time – for example, the plane crash will certainly generate many stories with new elements, but it is essentially the same story.
- Associated scripts, graphic objects, web links, metadata coming from dope sheets
- Broadcast rights information
- GPS information
- Metadata entered manually by journalists, on occasion
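As a rough illustration of such a story-centric record, the Python sketch below gathers those sources into a single object. Every field name here is hypothetical; a production MAM would model this far more richly:

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class NewsStory:
    """One story record aggregating every related source of information."""
    slug: str
    wires: list = field(default_factory=list)        # related agency wires
    videos: list = field(default_factory=list)       # related video material
    versions: list = field(default_factory=list)     # earlier takes on the same story
    attachments: list = field(default_factory=list)  # scripts, graphics, web links, dope-sheet notes
    rights: Optional[str] = None                     # broadcast rights information
    gps: Optional[Tuple[float, float]] = None        # (latitude, longitude)
    manual_metadata: dict = field(default_factory=dict)  # entered by journalists

story = NewsStory(slug="airport-crash")
story.wires.append("wire-20240321-0193")
story.videos.append("clip-0421-01")
story.versions.append("airport-crash-morning-bulletin")  # same story, new elements
story.rights = "cleared for domestic broadcast"
story.gps = (48.7262, 2.3652)
story.manual_metadata["aircraft_type"] = "twin-engine jet"
```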
All of this information is generated as part of the production workflow, so when it comes to the archive, the challenge centres on capturing the high-resolution content on a cost-effective storage medium (traditionally a tape library, though these days more and more storage utilizes a public or private cloud). The proxy video remains online. The metadata is already there, and it is essential: it is what makes the content searchable.
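A minimal sketch of that archive step might look like the following, assuming hypothetical cold_store and online_index interfaces. The point is simply that only the high-resolution essence goes to cheap tape or cloud storage, while the proxy and the searchable metadata remain online:

```python
def archive_story_media(story, cold_store, online_index):
    """Move high-resolution essence to cost-effective cold storage (tape
    library or cloud) while the proxy and metadata stay online."""
    for video_id in story.videos:
        # Returns a locator such as a tape barcode or a cloud object URL.
        hires_location = cold_store.put(video_id)
        online_index.register(
            video_id,
            proxy=f"/proxies/{video_id}.mp4",   # low-res proxy stays browsable
            hires=hires_location,               # pointer used for a later restore
            metadata=story.manual_metadata,     # inherited metadata keeps it searchable
        )
```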
Within a typical newsroom environment there is little or no time available to re-index each story. If this process can be automated and integrated within the main workflow, it offers major time and cost savings compared with manual indexing!
And what’s next? Harness the power of the semantic web
Once you have put an infrastructure in place that is capable of organizing metadata, managing media and providing search tools (which is, at the end of the day, what we call a MAM, a media asset management system), then you can think further ahead: the creative possibilities are considerable.
Proposed by Sir Tim Berners-Lee back in 2001, the semantic web is a web of data that can be processed directly and indirectly by machines with very little human intervention. It requires clever data-mining technology under the hood, but from a user’s point of view it offers a way to identify and manage complex relational links between broadcast assets and information in a simple and readily understandable fashion.
Semantic web tools make it easy to explore and correlate multiple sources of information, to evaluate what you find, and to follow links and lines of development that might never otherwise surface.
Semantic web tools enable broadcasters to make sense out of what was previously unorganized metadata. Using this technology you can organize your media assets into topic groups, and gain a far better understanding of the personalities related to a story. And importantly, you can easily relate different stories to each other.
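As a toy illustration of the idea, using the open-source rdflib library rather than Dalet’s own tooling, the sketch below expresses relationships between assets as semantic triples and asks for every story that shares a person or a topic with the crash story. All URIs and names are invented:

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/news/")
g = Graph()

# Triples relating stories to the personalities and topics they involve.
g.add((EX["airport-crash"], EX.mentions, EX["airline-ceo"]))
g.add((EX["airline-safety-review"], EX.mentions, EX["airline-ceo"]))
g.add((EX["airport-crash"], EX.topic, EX["aviation-safety"]))
g.add((EX["runway-expansion-debate"], EX.topic, EX["aviation-safety"]))

# Any story that shares a property value with the crash story is related.
query = """
SELECT DISTINCT ?other WHERE {
    <http://example.org/news/airport-crash> ?p ?shared .
    ?other ?p ?shared .
    FILTER (?other != <http://example.org/news/airport-crash>)
}
"""
for row in g.query(query):
    print(row.other)  # -> airline-safety-review, runway-expansion-debate
```

Grouping assets by the shared value (the person or the topic) is also how topic groups fall out of the same data with no extra indexing effort.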
At Dalet, we are working on ways to integrate this technology within newsrooms and other production environments. The starting point for this process is simplifying and automating the essential metadata entry process – already we are doing this with many of our customers.
If you believe that it’s time to turn up the burners on your newsroom archive systems, then we would love to talk to you and explain how Dalet can make that process easier and more productive. Contact your local Dalet sales office today and let’s kick off that conversation.