A metadata platform for visual media
Metadata makes today’s web work. Descriptive data provides the foundation for discovery, recommendation, and personalization: almost everything we rely on the internet to help us with. Yet management believes there is currently no way to apply accurate metadata to visual media quickly, easily, and at web scale. In a world where we share 1.8 billion photos and 432,000 hours of YouTube video every day, this is a problem.
Management believes that Tagasauris has reinvented metadata for visual media. Today, we solve the metadata problem for major content creators like Disney, ABC, AOL, Trunk Archive, and Magnum Photos. Our proprietary (and patent-pending) platform focuses on three key areas:
1. We combine the efforts of humans and computers to apply networked metadata to visual media at web scale. By applying metadata, we make a piece of visual media “content aware”: we teach a computer what the content of that image or video is.
2. We use the principles of linked open data and graph theory to store metadata in a semantic database. By doing so, we take our content-aware media and make it “relationship aware”: we teach a computer not only what is in an image or video, but also how that content relates to the rest of the world (a brief illustration follows this list).
3. We integrate our human-assisted computing platform with our semantic database to create an evergreen knowledge graph that is continuously learning and keeping our networked metadata up to date.
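To make the distinction between “content aware” and “relationship aware” concrete, the sketch below stores metadata as linked-data triples and answers a question that no single tag could. It is a minimal, hypothetical illustration only: the image IDs, predicate names, and Wikidata-style identifiers are placeholders chosen for readability, not our production schema or API.

```python
# A minimal, hypothetical illustration of triple-based metadata.
# All identifiers below (image IDs, predicates, Wikidata-style
# references) are illustrative placeholders, not a production schema.

# Each fact is a (subject, predicate, object) triple, the basic unit
# of linked open data. Objects that are shared identifiers tie local
# metadata into the wider web of data.
triples = {
    # Content-aware layer: what is in each image.
    ("img:1001", "depicts", "wd:Q9696"),        # John F. Kennedy
    ("img:1001", "depicts", "wd:Q165421"),      # Jacqueline Kennedy
    ("img:2002", "depicts", "wd:Q23"),          # George Washington
    # Relationship-aware layer: how that content relates to the world.
    ("wd:Q9696", "positionHeld", "wd:Q11696"),  # President of the US
    ("wd:Q23",   "positionHeld", "wd:Q11696"),
}

def depicted_in(image):
    """Content-aware query: what does this image show?"""
    return {o for s, p, o in triples if s == image and p == "depicts"}

def images_depicting_holders_of(office):
    """Relationship-aware query: find images of anyone who held a
    given office, even though no image is tagged with that office."""
    holders = {s for s, p, o in triples
               if p == "positionHeld" and o == office}
    return {s for s, p, o in triples
            if p == "depicts" and o in holders}

print(depicted_in("img:1001"))                   # the two tags above
print(images_depicting_holders_of("wd:Q11696"))  # {'img:1001', 'img:2002'}
```

The design point is that the object of one triple (for example, wd:Q9696) can be the subject of another, which is what lets a query traverse from an image’s tags into general world knowledge rather than stopping at the tags themselves.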
We have applied our technology to millions of images, resulting in improved discoverability and increases in traffic, engagement, and sales for our clients. In 2014, we partnered with Disney and ABC to expand this capability to video content. This year, we are looking to grow our client base to include a variety of other broadcasters, content creators, publishers, and media and creative companies.
_________________
Source: KPCB estimates based on publicly disclosed company data (2014) and YouTube.