
Tag: data

Profit per pixel second – pps?

Over the last couple of years I’ve had two concurrent obsessions when it comes to ecommerce: data and online merchandising. The former is the foundation of everything we do and sell online – product data, customer data, metadata, behavioural data… Increasingly, my interest in data has extended to behavioural and attention metadata, as well as the free(r) interchange of said data. The interchange is made possible by APIs, microformats and emerging XML standards such as the Attention Profiling Markup Language (APML). The open data and data portability movements are also vital for a future in which all sorts of data can intermingle, be mashed up and generally create valuable services.
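For the curious, APML is plain XML describing where your attention goes. Here’s a minimal sketch, built with Python’s standard library, of what an attention profile might look like – the element names follow the APML 0.6 draft as best I recall it, and the concept keys and source are invented for illustration:

# Illustrative sketch of an APML-style attention profile.
# Element names per the APML 0.6 draft; keys/values are invented.
import xml.etree.ElementTree as ET

apml = ET.Element("APML", version="0.6")
body = ET.SubElement(apml, "Body", defaultprofile="shopping")
profile = ET.SubElement(body, "Profile", name="shopping")
implicit = ET.SubElement(profile, "ImplicitData")
concepts = ET.SubElement(implicit, "Concepts")

# Each concept carries a key and a relative attention value.
for key, value in [("cameras", 0.92), ("film photography", 0.78), ("vouchers", 0.15)]:
    ET.SubElement(concepts, "Concept",
                  {"key": key, "value": str(value), "from": "example.com"})

print(ET.tostring(apml, encoding="unicode"))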

I’ve covered this in presentations over the last two years, culminating in my Digital Trends series, given this month, in which we reach a level of ‘epiphenomenology’ and magic by extension of these trends. The slides for this presentation are available on Slideshare.

In tandem, I’ve been working with clients and collaborators on advancing approaches to online merchandising – the art of selling online. We’ve covered this twice so far at the European eCommerce Forum (notes from the inaugural ECF were posted last year) and it’s also a module in the upcoming Certificate/Diploma in Internet Retailing. The aim, of course, is to maximise the profitability of the merchandised ‘page’ online.

That approach was fine while eCommerce was in a growth phase and customers seemed keen to spend ever more time online. However, in a saturated market there’s evidence that online customers are settling into a core group of a dozen retail sites (where ‘retail’ includes aggregation/affiliate, voucher and cashback portals which – from a customer’s perspective – are simply alternative ways to shop). The battle now is as much for the customer’s attention as for their money once you have that attention.

These two themes come together in a measure for merchandising effectiveness – profit per pixel second.

This combines the notion of ‘yield per pixel’ presented to a customer with the idea that one only has a given time in which to persuade the customer, AND that those seconds have been ‘borrowed’ from the customer’s other activities, their other favourite sites or simply from calls upon their time in the ‘real’ world.
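To make that concrete, here’s a back-of-envelope sketch in Python of how the measure might be computed for zones of a merchandised page. The inputs (attributed profit, zone areas, dwell times) and the weighting that acknowledges ‘not all pixels are created equal’ are all hypothetical placeholders – a thinking aid, not a proven model:

# Back-of-envelope profit per pixel second (pps) for page zones.
# All figures below are invented placeholders, for illustration only.

zones = [
    # (zone name, attributed profit in GBP, area in pixels, mean dwell in seconds)
    ("hero banner",     420.0, 960 * 400, 3.5),
    ("top sellers row", 610.0, 960 * 250, 2.1),
    ("voucher sidebar",  95.0, 200 * 600, 1.2),
]

def pps(profit, area_px, dwell_s, weight=1.0):
    """Profit yielded per pixel of space per second of customer attention.
    'weight' lets us acknowledge that not all pixels are created equal
    (e.g. above the fold versus buried below it)."""
    return weight * profit / (area_px * dwell_s)

for name, profit, area, dwell in zones:
    print(f"{name:18s} {pps(profit, area, dwell):.2e} GBP/px/s")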

This approach means that we might no longer want to ‘retain visitors’ on our sites for a long time – rather, a quick, effective visit might be best for the customer. We can also start relaxing about multiple, short visits to our sites (for example for research, or to monitor stock availability or trends) if we can see those visits contributing to sales. The ‘yield’ or profitability measure focuses our efforts on getting the most profit rather than buying the highest turnover.

I’ve been doing some initial work on how this proposed measure might inform day-to-day merchandising activity, or even how it might be measured in practice (since we know that ‘not all pixels are created equal’), but I’d appreciate thoughts and help on this, not to mention alternative suggestions or rebuttals.

Do let me know either in the comments or via direct email, as well as volunteering to help with some data – in confidence, of course.

DigitalShorts: blackboards, magic, Google History and porn

So then, to Brighton, for another outing of my Digital Shorts presentation, arranged by Econsultancy (see events calendar on my business blog).

It was lovely to have an excuse to visit Brighton again, and to take a quick, chilly wander along the seafront with my new-old Minolta CLE, tack-sharp 28mm Elmarit-M and a roll of Fuji 1600 ASA (golfball grain). Results to be shared in due course if acceptable – although do see the results from Paris last month.

The venue was cosy and there was an interesting group – many digital agencies, a sprinkling of retailers and some software vendors.

The fun began (ahem) when we realised that there was neither a projector nor a screen available. A couple of frantic calls later and we realised that they were ‘lost’. Hmm. In the Hove lanes we could see into people’s home offices, and it was tempting to have Craig push in a door and ‘borrow’ a 40″ plasma; in the end, however, the cafe downstairs lent us their menu blackboard and – drumroll – a piece of chalk!

So – with the support and chuckles of the assembled, alcohol-fuelled crowd, I cracked on with a presentation delivered with the power of waving hands and – yes – chalk 🙂

It was a laugh and the questions from the audience were tough, robustly-put and really engaging. I had a great night.

Indeed, I _knew_ it was a cunning group by the way they took my demonstration of Google History to heart. I’d mentioned how APML and attention tracking were alive and with us – witness Google History (and I showed mine, noting how careful one should be about sharing it in case it reveals compromising past activity!).

Anyway, after the presentation I left my laptop at the front for people to see some of the demos and realised that a couple of people were looking a little _too_ sneakily pleased with themselves (yes, you know who!).

It turns out that they’d indulged in a little guerrilla history-frigging and gentle porn surfing (along with the kindergarten ‘reset home page’ routine) so that it would appear in my history: excellent!

I know that an audience has taken my points to heart when we see this sort of behaviour 🙂 I can teach them no more than this 😉

During the evening we took a journey through the phenomena that occur when ever better-structured data, metadata and behavioural data meet open, free exchange across increasing numbers of nodes. We then considered further possibilities – ‘epiphenomena’, if you will – and how these would, in short order, become indistinguishable from ‘magic’.

It was a great opportunity to think a bit beyond the pressing commercial exigencies of 2009 and envision the services we’d be engaging with in a couple of years.

If you’re interested in seeing the slides they’re online at Slideshare:

Ps071 Digitalshorts Manchester

The event’s also been covered on Twitter via the #digitalshorts tag:

https://search.twitter.com/search?q=digitalshorts

Finally, I’m going to be delivering a similar presentation for the Sense Network on 25 February in London – see my calendar for details.

Upcoming speaking in Manchester and Brighton [Digital Shorts – Retail, Recession and Emerging Trends]

For the last couple of years I’ve presented at the “Digital Shorts” events in Manchester and sometimes Leeds (though this year it’s Manchester and Brighton).

The events – organised by Econsultancy and regional partners – are an opportunity for digital marketers and ecommerce folk to meet for drinks and discussion based around a review of the Christmas/2008 trading and predictions of emerging trends.

Two years ago I said we were in denial about a recession; last year we covered social media platforms and rich media; while this year we’re going on a data journey where data + social + behaviour + exchange leads towards epiphenomena. Or ‘magic’ (since, as Arthur C Clarke said: “Any sufficiently advanced technology is indistinguishable from magic”).

If you’re going to be in Manchester let me know since I think we’ll ‘do dinner’ afterwards, while in Brighton it’ll just be beers and the late night train home!

Details of the events are on Econsultancy’s shiny new website:

Manchester Digital Shorts, 4 February 2009

Brighton Digital Shorts, 11 February 2009.

Updates and echo-locating via Brightkite.

Digital Shorts – Retail, Christmas, Recession and Emerging Trends for 2009 | Econsultancy

 

Google Mobile App – clever convergence of data, directory categorisation, location and interface

As the regular reader will know, I’m a big believer that the convergence of location-based information, structured data, inferred/contextual relationships and a slick, relevant interface will change our world and start delivering the sort of “future” interactions that we saw in 1960s SciFi.

Google’s Mobile App is a step closer.

I won’t rehash the explanatory video – it’s, er, self-explanatory – but the really interesting part for me isn’t the voice recognition but rather the emerging “common sense” in the Google results. Note that there’s now an interpretive layer that’s intercepting calculations, directory-type enquiries (eg film listings, nearby restaurants) and informational or evaluative requests.
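As a toy illustration of what such an interpretive layer is doing (my guess at the shape of the logic, emphatically not Google’s actual implementation – the rules and categories below are invented), imagine a dispatcher that routes a query to a calculator, a local directory or the plain text index:

# Toy sketch of an interpretive layer in front of a text index.
# The routing rules and categories are my own guesses, purely illustrative.
import re

def interpret(query: str) -> str:
    q = query.lower().strip()
    # Arithmetic: hand off to a calculator rather than the index.
    if re.fullmatch(r"[\d\s\.\+\-\*/\(\)]+", q):
        return f"calculator: {eval(q)}"  # fine for a demo; never eval untrusted input
    # Directory-type enquiries: route to local/structured data.
    if any(w in q for w in ("near me", "nearby", "film listings", "restaurants")):
        return "directory lookup (location-aware)"
    # Everything else falls through to the ordinary text index.
    return "web search"

for q in ("12 * 99 + 4", "restaurants nearby", "who wrote Profiles of the Future"):
    print(q, "->", interpret(q))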

This is a major step forward for something that we tend to think of as a text-indexing service.

I’m a great fan of knowledge systems like TrueKnowledge (which has an inference engine built upon structured facts, questions and relationships – wonderful) – but it seems that Google is slowly but surely adding equivalent capabilities, by stealth and in parts.

Let’s start counting the days until this is seen as “just normal”…

UPDATE: I’ve been playing this morning at a client’s (different voices, male/female, Northern, Welsh, Australian) and we’re getting a one-in-five success rate. Still, that it even works 20% of the time is amazing, and I’m sure it’ll train me to get clearer 😉