As a technologist who markets to marketers, I can fall victim to self-doubt about wanting my own marketing to be perfect.
Yet I don’t claim to be perfect at marketing. I do claim to be discovery-driven. To be open to trying things and looking for the learning in them.
Most data consultants wouldn’t let you see behind the curtain at what they are currently experimenting with. They would deflect attention elsewhere. To things that are solid and working well.
But I’m not like most consultants. Because I believe that if I truly want to learn, I want to draw attention to the experiment. How else will I get direct feedback?
I am a numbers person. I am looking at the metrics of this newsletter and list building. Experimenting with a lead magnet ad campaign to get baseline figures for new subscribers. Looking at the opens, clicks, and unsubscribes.
I’m also experimenting with which day I send it. Hence you are getting it on a Wednesday this week. Creating an editorial calendar for my content. Trying to get ahead on writing so that sending doesn’t slide to late Friday. A technologist learning to be a content marketer!
So that’s me. But what about you? Do you sometimes feel that experimentation can expose vulnerability? Do you worry that it can throw your performance numbers off? That it can make you look bad not only to the customer, but to the business?
This past week I had an opportunity to listen to Alex Osterwalder on a free webinar with 100 Leaders Live. Alex talked about companies that run innovation projects while also delivering established products. While he was talking about line-of-business innovation, what he said applies to marketing those products and services. After all, both types need to be marketed.
Alex suggested that you measure and evaluate these two types of business differently.
For established business, you measure effectiveness at a project level. For marketing, this means measuring individual campaigns.
For innovative work, you also measure effectiveness at a portfolio level. By measuring the performance of all the innovation tests in a period collectively, you absorb the ups and downs into an overall performance metric for innovation.
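To make that concrete, here is a minimal sketch in Python of the two views. The campaign names, spend, and subscriber counts are made-up illustrations, not real results from my newsletter.

```python
# Hypothetical campaign results -- illustrative numbers only, not real data.
established = [
    {"name": "Monthly newsletter promo", "spend": 500, "subscribers": 125},
]

innovation_tests = [
    {"name": "Lead magnet ad A", "spend": 200, "subscribers": 10},
    {"name": "Lead magnet ad B", "spend": 200, "subscribers": 55},
    {"name": "Wednesday send test", "spend": 100, "subscribers": 0},
]

def cost_per_subscriber(spend, subscribers):
    """Cost per new subscriber; None when a test produced no sign-ups."""
    return spend / subscribers if subscribers else None

# Established work: judge each campaign on its own numbers.
for c in established:
    print(c["name"], cost_per_subscriber(c["spend"], c["subscribers"]))

# Innovation work: judge all the period's tests together, so one flop
# (zero sign-ups) is absorbed by the wins in the same portfolio.
total_spend = sum(t["spend"] for t in innovation_tests)
total_subs = sum(t["subscribers"] for t in innovation_tests)
print("Innovation portfolio", cost_per_subscriber(total_spend, total_subs))
```

Judged one test at a time, the flop looks like a failure. Rolled up into the portfolio for the period, it is just part of an overall cost per subscriber.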
Interesting. It is a way to balance the risk of being vulnerable. A way to learn what really works. Then what works is moved to established business. What doesn’t is left behind.
What do you think? Do you experiment with how you market things? Is there tolerance in your company for things sometimes not working? How does this impact your performance measurement?