Interesting to see a video search technology making it to the big time (free time).
Podzinger uses speech recognition software to ‘search inside’ audio and video. The cunning part is that it’s able to do this across a whole repository of stuff (YouTube) and then add the results to its index. Looking at the Podzinger website, they have a number of ‘content partnerships’, and I assume these entail some form of notification when new content is available for indexing.
Many moons ago, at the BBC, I was excited by the notion that we could search our TV output for text strings. At the time (back in 2000), speech-to-text conversion was execrable, and so the idea didn’t fly.
Interestingly, the part of the BBC in which I worked was also responsible for the closed captioning (subtitles), and so in theory we had a time-stamped, “already text” option for searching. Unlike speech-to-text software, the Carbon-Based Lifeforms who were creating the captions (think of stenographers on speed) could correctly spell the phonetically challenging names and technical terms that defeat generalist conversion dictionaries.
While it’s great to see a move to increased searchability of visual material, I’d love to see more use made of the closed-caption resource. Anyone at the BBC reading this and fancy a quick mashup? 🙂
It’d be worth it just to be able to search on “Sound of footsteps approaching” or “[loud music]”, not to mention the seminal “[Warm applause]”.
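The mashup really wouldn’t need much. Here’s a minimal sketch in Python, assuming the captions could be exported in SubRip (.srt) format, where each cue carries a time range and its text (the sample cues below are invented for illustration):

```python
import re

# Hypothetical caption export in SubRip (.srt) format:
# each cue is an index, a time range, and the caption text.
SAMPLE_SRT = """\
1
00:00:01,000 --> 00:00:04,000
Sound of footsteps approaching

2
00:00:05,000 --> 00:00:07,500
[Warm applause]

3
00:00:08,000 --> 00:00:12,000
Good evening and welcome to the programme.
"""

def search_captions(srt_text, query):
    """Return (start_time, caption) pairs whose text contains the query."""
    hits = []
    # Cues are separated by blank lines.
    for cue in re.split(r"\n\s*\n", srt_text.strip()):
        lines = cue.splitlines()
        if len(lines) < 3:
            continue
        start = lines[1].split(" --> ")[0]   # e.g. "00:00:05,000"
        text = " ".join(lines[2:])
        if query.lower() in text.lower():
            hits.append((start, text))
    return hits

print(search_captions(SAMPLE_SRT, "applause"))
# [('00:00:05,000', '[Warm applause]')]
```

Because every cue is already time-stamped, a hit takes you straight to the moment in the programme — no speech recognition required.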