
Tag: APML

Profit per pixel second – pps?

Over the last couple of years I’ve had two concurrent obsessions when it comes to ecommerce: data and online merchandising. The former is the foundation of everything we do and sell online – product data, customer data, metadata, behavioural data… Increasingly, my interest in data has extended to behavioural and attention metadata, as well as the free(r) interchange of said data. The interchange is made possible by APIs, microformats and emerging XML standards such as the Attention Profiling Markup Language (APML). The open data and data portability movements are also vital for a future in which all sorts of data can intermingle, be mashed up and generally create valuable services.

I’ve covered this over the last two years in presentations, culminating in my Digital Trends series given this month, where we reach a level of ‘epiphenomenology’ and magic by extending these trends. The slides for this presentation are available on Slideshare.

In tandem, I’ve been working with clients and collaborators on advancing approaches to online merchandising – the art of selling online. We’ve covered this twice so far at the European eCommerce Forum (notes from the inaugural ECF were posted last year) and it’s also a module in the upcoming Certificate/Diploma in Internet Retailing. The aim, of course, is to maximise the profitability of the merchandised ‘page’ online.

This approach was fine while eCommerce was in a growth phase and customers seemed keen to spend ever more time online. However, in a saturated market there’s evidence that online customers are settling into a core group of a dozen retail sites (where ‘retail’ includes aggregation/affiliate, voucher and cashback portals which – from a customer’s perspective – are simply alternative ways to shop). The battle now is as much for the customer’s attention as for their money once you have that attention.

These two themes come together in a measure for merchandising effectiveness – profit per pixel second.

This combines the notion of ‘yield per pixel’ presented to a customer, with the idea that one only has a given time in which to persuade the customer AND that those seconds have been ‘borrowed’ from the customer’s other activities, their other favourite sites or simply from calls upon their time in the ‘real’ world.
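To make the idea concrete, here is a minimal sketch of how the measure might be calculated for a single merchandised slot. The attribution model, field names and figures are my own, purely for illustration – deciding how profit is attributed and how dwell time is measured is exactly the open part of the problem.

```python
# Hypothetical sketch of 'profit per pixel second' for one merchandised slot.
# All figures and names are illustrative, not a definition of the measure.

def profit_per_pixel_second(attributed_profit, width_px, height_px, dwell_seconds):
    """Profit attributed to a slot, divided by the pixel area it occupies
    and the seconds of attention it 'borrowed' from the customer."""
    pixel_seconds = width_px * height_px * dwell_seconds
    return attributed_profit / pixel_seconds

# Comparing a hero banner with a small cross-sell slot on the same page view:
hero = profit_per_pixel_second(attributed_profit=4.20,
                               width_px=960, height_px=300, dwell_seconds=8)
cross_sell = profit_per_pixel_second(attributed_profit=1.10,
                                     width_px=300, height_px=250, dwell_seconds=3)

print(f"hero:       {hero:.2e} per pixel second")
print(f"cross-sell: {cross_sell:.2e} per pixel second")
```

On these invented numbers the small cross-sell slot actually out-yields the hero banner per pixel second, which is the sort of counter-intuitive result the measure is meant to surface.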

This approach means that we might no longer want to ‘retain visitors’ on our sites for a long time – rather, a quick, effective visit might be best for the customer. We can also start relaxing about multiple, short visits to our sites (for example, research or monitoring stock availability or trends) if we can see them contributing to sales. The ‘yield’ or profitability measure focuses our efforts on getting the most profit rather than buying the highest turnover.

I’ve been doing some initial work on how this proposed measure might inform day-to-day merchandising activity, or even how it might be measured (since we know that ‘not all pixels are created equal’), but I’d appreciate thoughts and help on this, not to mention alternative suggestions or rebuttals.
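One possible way to reflect ‘not all pixels are created equal’ would be to weight pixel seconds by page zone before dividing profit by them. The zones and weights below are invented for illustration only – calibrating them against real behaviour is precisely the kind of work I’m asking for help with.

```python
# Hypothetical refinement: weight pixel seconds by page zone.
# Zones and weights are invented for illustration.
ZONE_WEIGHTS = {"above_fold": 1.0, "below_fold": 0.4, "footer": 0.1}

def weighted_pixel_seconds(width_px, height_px, dwell_seconds, zone):
    return width_px * height_px * dwell_seconds * ZONE_WEIGHTS[zone]

# The same slot 'borrows' less weighted attention below the fold:
print(weighted_pixel_seconds(300, 250, 3, "above_fold"))  # 225000.0
print(weighted_pixel_seconds(300, 250, 3, "below_fold"))  # 90000.0
```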

Do let me know either in the comments or via direct email, as well as volunteering to help with some data – in confidence, of course.

Google Mobile App – clever convergence of data, directory categorisation, location and interface

As the regular reader will know, I’m a big believer that the convergence of location-based information, structured data, inferred/contextual relationships and a slick, relevant interface will change our world and start delivering the sort of “future” interactions we saw in 1960s sci-fi.

Google’s Mobile App is a step closer.

I won’t rehash the explanatory video – it’s, er, self-explanatory – but the really interesting part for me isn’t the voice recognition but rather the emerging “common sense” in the Google results. Note that there’s now an interpretive layer that’s intercepting calculations, directory-type enquiries (e.g. film listings, nearby restaurants) and informational or evaluative requests.

This is a major step forward for something that we tend to think of as a text-indexing service.

I’m a great fan of knowledge systems like TrueKnowledge (which has an inference engine built upon structured facts, questions and relationships – wonderful), but it seems that Google is slowly but surely adding equivalent capabilities by stealth and in parts.

Let’s start counting the days until this is seen as “just normal”…

UPDATE: I’ve been playing with it this morning at a client’s (different voices – male/female, Northern, Welsh, Australian) and we’re getting a one-in-five success rate. Still, that it works even 20% of the time is amazing, and I’m sure it’ll train me to speak more clearly 😉