These are my links for October 23rd from 10:40 to 21:48:
I was really honoured to be asked to keynote at Econsultancy’s 2009 FODM in June 2009. I’d spoken before and found the audience to be tough but receptive. It’s one of the more difficult speaking gigs of the year, I find, and there’s always pressure to perform well (and Ashley commenting that there’s “no pressure” of course just makes it worse… 😉 ). At least this year he didn’t promise I’d be “funny” (a throwaway remark that gave me my first night of lost sleep in 10 years as I imagined being forced to perform a stand-up routine at the Comedy Club without a script! Wah).
Anyway, the event was held at the rather spectacular Congress Hall, and upon entering I realised both that it was a great presentation venue and that 350 marketers is quite some audience 😉
There were some spectacular and energetic speakers (Jonathan MacDonald in particular), and two sessions that interested me in particular on digital publishing and ecommerce in retail.
The presentation went well and I used the time to reprise themes from previous FODM presentations, wander into the realms of Augmented Reality, build on some KPI discussions I’d been having with Michael “KPI” Ross and finally introduce the “Obama-Preedy Pricing Principle” (the result of a beer-supported discussion with Tony Preedy on how discounts and promotions should be related to a specific place, time and – increasingly – appropriate behaviours). Maybe it should have been the Pavlov-Preedy Principle??
You can see all of the presentations on the Econsultancy page (you need to be a subscriber), or you can see my presentation via the Slideshare link:
As the regular reader will know, I’m a big believer that the convergence of location-based information, structured data, inferred/contextual relationships and a slick, relevant interface will change our world and start delivering the sort of “future” interactions we saw in 1960s sci-fi.
Google’s Mobile App is a step closer.
I won’t rehash the explanatory video – it’s, er, self-explanatory – but the really interesting part for me isn’t the voice recognition but rather the emerging “common sense” in the Google results. Note that there’s now an interpretive layer that’s intercepting calculations, directory-type enquiries (e.g. film listings, nearby restaurants) and informational or evaluative requests.
This is a major step forward for something that we tend to think of as a text-indexing service.
I’m a great fan of knowledge systems like TrueKnowledge (which has an inference engine built upon structured facts, questions and relationships – wonderful) – but it seems that Google is slowly but surely adding equivalent capabilities, by stealth and in parts.
Let’s start counting the days until this is seen as “just normal”…
UPDATE: I’ve been playing with it this morning at a client’s (different voices, male/female, Northern, Welsh, Australian) and we’re getting a one-in-five success rate. Still, that it even works 20% of the time is amazing, and I’m sure it’ll train me to speak more clearly 😉