Theory: A wearable tech (hardware) company is a software company

There has been an explosion of interest in wearable tech or hardware companies in general lately. From meetup groups to accelerators specializing in hardware to a playbook for wearable tech on TechCrunch, everyone is trying to figure out if this new trend is a big new opportunity or at the very least a fad to capitalize on.

I have the following theory:

Wearable tech (or hardware) consists of connected physical (usually plastic) user interfaces (input devices) for web-based software or apps, aiming to be a stickier (or more natural) way to input useful data and/or interact with software on a mobile device (phone). Without advanced software, there is no value in the hardware.

Wearable Tech solves a problem with web-based software — you have to use a mouse (or touchscreen) and keyboard to input data. Phones are better than laptops or desktops because they are more portable, but they still aren’t ideal. Popular apps like Foursquare, which can be incredibly useful to both marketers and customers, have been limited because ‘checking in’ has a terrible workflow.

I have to pull out my phone, open an app, type or tap on stuff, then put it back in my pocket.

Why can’t my phone stay in my pocket? Enter Pebble. In the very early days Eric would insist that people really don’t want to pull out their phones every time they want to check why it is buzzing. A quick glance at a watch would be a huge improvement. A lot of people agreed (I love my Pebble).

What Pebble (and other wearable tech) can do is create a sensor-enabled connection for data collection and consumption that is a more natural, lower-cost interaction. It still requires software. In fact, it should be built on software that adds value, with the hardware as the sticky part people can’t live without.

Nike Fuel Band, Fitbit, etc.

Update April 21, 2014: There isn’t a lot of information yet, but it appears Nike is at least scaling back its Fuel Band and focusing on software. I’m not sure that should be taken as failure; rather, it points to the fact that this class of wearable tech is experimental. The band wasn’t Nike’s core product, but it has allowed them to understand how sensors, software, and their shoes can interact with customers.

Hardware companies that ignore the software and overall User Experience (UX) are in trouble

RIM (or Blackberry) is an easy example to pick on, but if you think about it, they gained popularity not because they had great hardware but because the way Blackberry OS manages messages is arguably still ahead of everyone else. Apple focused on the overall UX parts that RIM didn’t: how the customer interacts with the brand, device appearance, apps, carrier billing, photos, and music. For managing messages, iOS is still terrible. But the entire User Experience is important.

For upstart hardware companies the UX is super important and should not be ignored. Just because your hardware does something novel does not mean someone will buy it (and use it). Your hardware interacts with software and you need to think about how that happens, always.

Companies like Kiwi are offering products that will enable hardware companies to spend more time figuring out the UX than how to get working hardware prototypes. More useful tools are on the horizon.

There are a lot of opportunities in wearable tech and the internet of things

A lot of hardware companies are in trouble because their software only runs on their hardware and isn’t networked. Having a very powerful and connected ‘brain’ in a phone changes the focus: design for the UX advantages, build the software to enhance them, and spend far less time on the extremely complex (and costly) development of hardware that needs to do everything. The low-hanging fruit in wearable tech and hardware is building the software that lets you use your phone as the brains behind an array of sensors.

Also, what is being missed in all this hype is that things like hearing aids are the original “wearable tech”, and the technology built into them, along with how they get to consumers, is way ahead of everyone entering the space now. Highly advanced devices in the larger medical space are likely where the big wins are going to be.

The first round of wearable tech was medical. Then the landscape changed with powerful mobile computing devices (phones). This next round is about people building highly advanced hardware that connects to software to deliver more than an incremental improvement in User Experience.

Advances in connected devices bring us one step closer to ubiquitous computing. Exciting times!

10 years of blogging: coder to dad to entrepreneur

In April 2004 I started blogging. When it started, I wrote about things that I would have posted on uw.general (the wild west of amazing backchannel at U of Waterloo once upon a time) – status updates on the main web page, standards, and other interesting things. That evolved into an interesting timeline of life events over the years. In looking back I can see my transition from a coder working away at web stuff to a dad and entrepreneur. What I learned going back over my blog’s 10 years:

  • Writing more has made me a better writer, or at least made me expect more from my writing, which means I blog less.
  • Going through my old posts reminded me that startups need community more than anything – that is what gave me the confidence to build one.
  • It is fun to build things. I don’t want to ever stop doing that.
  • I need to shift back to a balance of sharing life events and writing about things I am passionate about.

This is my current top 10 in the last 10 years.

  1. Back then I was really excited about web development; this is when I first started thinking about Ruby on Rails in January 2005.
  2. It wasn’t until the summer of 2006 that I really got excited about development — that summer was a big one, with the development of some interesting things on Rails.
  3. January 2007 my first son was born (and it was mentioned in the Daily Bulletin at the bottom!) – I posted about the next 3 kids but this one was the first.
  4. January 2007 started the mobile project that became VeloCity. As part of that project we built a twitter clone, UW Chatter. It didn’t go anywhere but it was cool.
  5. I started a new job with the Special Projects Group and I was President of the University of Waterloo Staff Association – that work inspired TribeHR for me.
  6. StartupCampWaterloo was launched. It was small. In early 2008 we hosted the second one and it was big: over 100 people attended, including the infamous David Crow and future CDL G7 member Jevon MacDonald. Then in the fall of 2008 we got really excited about the Startup Community in Waterloo at StartupCampWaterloo3, even though the economy was falling apart.
  7. TribeHR was unveiled at DemoCampGuelph – that demo had a bad connection to the projector, lots of laughing, and 4 years later it was acquired by NetSuite.
  8. IgniteWaterloo started and I did the opening presentation as a last minute stand in!
  9. The moment I truly felt VeloCity was successful and that the startup community in Waterloo was heading to an awesome place, thanks to the amazing 7cubedproject.
  10. I learned how important things like fishing with kids are.

In 2013 and 2014 so far, my posts have been almost entirely focused on the work I am doing. The last 2 years have seen a big shift in my focus to family, but that doesn’t come out in my blog at all. I will work on that.

The next 10 years are going to be fun!

The Market for Credit and Supporting Entrepreneurs

Over the last few years of growth in Accelerator and Incubator programs, media coverage of early-stage tech startups has increased in Canada. The lack of coverage before these programs existed made media coverage a metric of success. For any entrepreneur support program to be relevant, it needs to be mentioned in the media, which has made the Alumni Success Metric a key metric used to identify the success of any program.

I think we need to find a better way to measure these programs and the effect on the problem they are solving.

As more and more programs compete on this metric they spend more on marketing to rise above the others which results in an increase in the costs to deliver a program. I believe competing on this metric can foster animosity between programs and hurts collaboration between a large number of extremely talented people.

What is the problem?

Founders are taking advantage of everything offered to them (as they should), which results in this common scenario in Canada (not based on any particular company):

  • Founders went to University of Toronto (and/or Waterloo and/or Ryerson and/or WLU and/or insert school here) and worked out of Banting and Best (and/or the Garage and/or the DMZ and/or any coworking space).
  • Someone else on the team took a pre-accelerator or some other community education program.
  • They are clients of MaRS and Communitech and Halton Innovation and…
  • OCE has awarded them a grant, MaRS IAF will invest in them, and IRAP might have had a role.
  • They might get into another accelerator program before they finally get a few key investors at the table and start to grow.

When they get VC funding or something big worth a media push, what happens? Up to 10 organizations want to be listed and each of them release a story about how proud they are. Few if any list the other organizations or programs or people that helped (because the list is huge).

How might this hurt entrepreneurs?

Funding and product announcements aren’t success; they are milestones that get blown way up in the local media as a result of everyone getting excited (excitement is good, celebrate the good things). It is possible that the positioning of programs’ media releases could confuse the market that the company needs to reach.

That said, the media coverage froth is likely localized to Canadian media, so it probably has no effect on where the companies’ market likely is: the United States.

This intense market for credit can be frustrating for everyone who delivers programs. In reality, it takes a community to raise a startup: from funders that have done it before to programs designed to focus attention, lower the risks associated with getting started, and build peer groups. We should all celebrate the entrepreneur and collectively be excited there are so many people out there helping them.

The metric is good for something.

Where I think the Alumni Success Metric does work is in inspiring new founders. Knowing that good things have happened for those that came before them in the same program is motivating; it is the same metric Higher Education uses to recruit undergraduate and graduate students.

How do we avoid the zero sum game around credit?

The metric is not useful for defining the success of any program as most of the support happens in parallel in accelerators or incubators. It is extremely difficult to know what helped and when and where or what made the difference. It creates something for programs to compete over when they should be collaborating.

The stories about companies growing shouldn’t be “x program’s y company has done z” but instead be about how the company achieved this milestone and all the people that helped along the way.

A metric needs to exist that can demonstrate how effective a program is without having each program battle it out with marketing.

Step #1 is that we have to stop thinking of service organizations or accelerators or incubators as startups. They aren’t. They are philanthropic organizations offering a support group and networking services for founders, funders, and service providers.

The main goal is not to build sustainable models around these organizations (how can most realistically generate revenue outside of an education or philanthropic model?) but build a sustainable ecosystem that doesn’t require the current level of philanthropic support. Every philanthropic organization should hope that one day the problem they are solving is no longer a problem. That should be no different with supporting entrepreneurs and everyone should work together to achieve that outcome.

What is the problem accelerators are solving?

There is currently a preoccupation with accelerators in the entrepreneur world, resulting in a large increase in programs. Arguably, the result of this frenzied growth is that ‘entrepreneurship’ is as commoditized as college. Unlike college, it is extremely hard to know which programs are adding value and which ones are wasting everyone’s time. That’s not to say investors aren’t in the know; they favour the programs they like – YC or TechStars, for example.

It could become (or has already become) virtually meaningless to be an accelerator-born internet entrepreneur, so why would you give up 6-12% of your company to do it? For investors it is really hard to cut through the noise. I think this is because few people actually know why accelerators exist at all. In some cases I fear that the people creating new ones likely aren’t clear on why they are creating these programs either.

How does anyone know which ones work? What problem are they solving? What metrics should they be tracking in order to get better at what they are doing?

Defining the problem(s) accelerators solve.

There are three problems I think accelerators are trying to solve:

  1. Investors need to identify talent.
  2. Talent needs to find the right investors and coaches.
  3. Education system failure.

The first is a relatively easy problem to solve. It is hard for investors to identify talent at an early stage; accelerator programs offer a filtering tool for investors, as they can take the top talent that applies and narrow it down to those that have the highest potential based on the criteria of the particular program. If an investor trusts the filtering job done by the accelerator, then that accelerator is providing value.

A suggested metric for this: measure how many alumni of a program receive funding, from what type of investor, and in what time span.

The second problem is that talented people and teams need to find the *right* investors and coaches. By the right investor I mean someone that will give you enough money and coaching that you can slowly de-risk your startup a little more and build momentum as you grow towards being a sustainable business. Founders need coaches to apprentice under while they build their company. The right investor is someone who will put in enough of their own money and time that they can help you get your business through the major milestones it faces. This likely means that party rounds are bad. What I think should be the goal: 4-6 investors, with an individual (not a VC) holding 1/2 to 1/3 of the total round.

This should result in the person(s) who put in significant capital also having a board seat and having their sleeves rolled up, ready and able to help.

A suggested metric: track who put the most personal money into the round. Are they on the board of directors or in some other significant role in the company? How much time per week/month do they spend with the founders?

The failure in education is a much harder problem to solve. Is it the traditional silos that are limiting education or is it the expectation that you go to school to be trained for a job or a bit of both or something else? Is the failure the education system (K-12) or is it the students or both?

In higher education you have environments that are designed to encourage independent thought backed by facts and research. You should be exploring and developing your networks. At no other point in your life will you be surrounded by that much leading-edge research and thinking. Just because a school doesn’t hand you your first startup with funding and office space does not mean the education system is failing entrepreneurs!

There is also already a process for very smart people to apprentice under others that have already developed their ability to take massive amounts of information and focus it on an outcome. It also happens to come with a filtering mechanism built right in that improves the likelihood that the person that finishes is relatively in the top few percent. It’s graduate school.

The process is not perfect but it is a process that works. Educating people is hard. Coaching people is harder still. If an accelerator is going to solve the failure of the education system in educating entrepreneurs it should take that part very seriously and not dismiss the education system as having nothing to offer.

A suggested metric: Does the accelerator have qualified educators and coaches that put in a significant amount of time (more than 1 hour a week) with each entrepreneur? Are there measurable outcomes expected of the entrepreneur? Are there consequences for not meeting expectations?

Accelerators should be more than marketing to the entrepreneur and placing them in a zoo for the public to see them in action. Education is serious business and it is about people’s future. Entrepreneurs need to have realistic expectations and enter with a clear idea of what they want out of the opportunity.

Everyone around accelerators is still learning about how to make them work and for whom. It is an exciting time in education — just be sure to track stuff that matters while you run the experiments!

A Perspective on Investor/Mentor Whiplash

The other day Fred Wilson posted an opinion and some tips on Investor/Mentor Whiplash. He took the position that it is a big problem for accelerators as well as early-stage and seed environments. Brad Feld took this as a bit of a misunderstanding of accelerators; he insists that TechStars creates an environment where early-stage companies can learn to manage the whiplash. Brad Feld states:

I disagree with Fred. It’s not a big problem. It’s the essence of one of the things an accelerator program is trying to teach the entrepreneurs going through it. Specifically, building muscle around processing data and feedback, and making your own decisions.

On the surface this seems correct. A problem (one of many) new founders face is the overwhelming barrage of mentorship (good and bad) and information, mixed with the inability to filter it. An accelerator should be able to provide an environment where a strong group of peers, with some guidance, can help build the “muscle around processing data and feedback.” In the last 6 years I have noticed that this is a common problem founders face, and their ability to manage it is important to their success. It wasn’t until I experienced the whiplash myself a 2nd and 3rd time that I fully appreciated the damage it can do even if you are prepared for it.

Generally what I tell early stage founders:

  • Only talk to customers once you have something to show them — but that shouldn’t take you a long time, don’t go heads down for months. Asking people what they want and not focusing on something specific they can touch/feel is a path to busy work and infinite sadness.
  • Avoid the mentor parties/socialization. Find two (or three) good people with opposing views and bounce specific data off them but only when you have done something that requires fresh eyes to advise you how to interpret the results.
  • Focus on what isn’t working when getting feedback from mentors. Founders need to be positive but you need to focus on the bad things when talking to your close mentors that have been through it already. If they can’t help you with the tough stuff why are you spending a lot of time with them?
  • Don’t expect a direct answer. Experienced mentors know you are the best person to run your company, not them, and they have developed a way of not telling you what or how to do things but instead challenge you to figure it out in a positive way.

Whiplash from mentors doesn’t just happen in startups, it happens everywhere people are giving you advice or have something to gain by influencing the decisions you are about to make or the opinion you develop on something.

Being prepared and learning to manage the whiplash isn’t just the essence of accelerator programs; it is the essence of education, which culminates in the top level of information filtering you can achieve – a PhD program. At the PhD level the filter muscle is almost too strong, but that is a topic for a whole other blog post.

The scary thing for entrepreneurs is that accelerator programs are too often run by people that don’t know how to effectively educate people and/or they have something to gain financially by the decisions founders make.

I think this *is* a big problem in accelerators. I wonder if the ability to teach that skill to founders (or select founders that already have that skill) is the difference between a successful accelerator (which is really only TechStars and YC) and one that isn’t (pretty much everyone else)?

What is the Value of the ‘Alumni Success’ Accelerator Metric?

There is a rapidly growing number of support groups and organizations that fit into the category of accelerator or incubator. What is the value of one of the core metrics many accelerators use today, ‘Alumni Success’, and what does success mean? That success can be a funding event, an exit, or some other significant milestone that has been made public. Each has a different value, but the purpose is to say something positive about the accelerator, which is, for many, a key ‘metric’ used to report back to those that back the program.

Why is it a metric at all? After all, shouldn’t accelerators/incubators be focused on making money? The role of accelerators in Canada, according to Mark MacLeod:

…these programs are not meant to help investors discover the next giant. They are there to help investors and mentors identify, nurture and develop talent. In smaller markets like Canada’s, we are sorely lacking in proven, been there, got the t-shirt talent.

I generally agree with Mark as the purpose of all these programs is to build a funnel of qualified talent which has a value to those that back the accelerator. That could include investors and/or the government (the tax payer). I will add that what happens with all this support is self guided experiential ‘business education’ but that is another blog post.

When an accelerator releases something that states “congrats to cohort company x on raising money from y”, how should future applicants weigh those releases when the company is an alumnus of more than one program? How should the people that support these organizations value those announcements?

As a recruiting and reporting tool I can certainly see the value in getting the organization’s name out there. The problem is that it starts to sound like a ‘party round’, where so many people have been involved in some way that it is impossible to say who made the difference. The truth is they all helped. The value for any one organization is not as high as it would be if the company only worked within a tiny controlled system. That simply isn’t happening, and that is a good thing. It takes an early-stage business ecosystem to build more frequent and bigger successes, and that ecosystem includes all points between here and the valley.

This is very similar to the education system, where every school at every stage of education can share in celebrating the success of its alumni. If a higher concentration of success is coming from a particular school, that won’t go unnoticed, and it should be supported and enhanced.

What I hope will happen, peacefully and relatively unnoticed by the entrepreneur, is that the organizations that have more success grow, those with low success fail, and new ideas are injected into the process as everyone keeps learning.

Accelerator Metrics and Developing Entrepreneurial Talent

I spent a little time at StartupWeekendHamilton3 in April as a mentor and was talking to one young founder who proclaimed that there was one great accelerator in Canada. Who he said it was surprised me a little and got me thinking: what makes an accelerator “the best”, and why should an eager founder care? The baseline in my mind is Y Combinator. It is hard to argue it isn’t the best seed-stage accelerator based on its results. What is difficult for everyone to agree upon is what it does to achieve those results or, even harder, what defines success.

In my opinion the key things it does:

  • Social Capital via Paul Graham – how he teaches founders and the hacker culture he has built provides entrepreneurs with access to the very best social capital that exists for anyone starting a technology based company.
  • Peer mentorship – the structure of the 12 weeks enables peers to hold each other accountable. This competition amongst comrades is powerful as it turns around the human nature of playing to our own strengths and pushes founders to “keep up with the Joneses.”
  • Hungry founders – funding is minimal. After a bit of a bump it has since been decreased, and I would bet that if you look at the successes out of YC, the biggest ones started off with the least financial resources.

There is a striking similarity between what YC does and the thinking/observations behind the Goldmine Effect by Rasmus Ankersen (watch it, it is interesting). The basic point is that if you can find the talent that has potential vs the talent that has already been refined, you will get a better result. Money and facilities do not make a difference; identifying underdeveloped talent does. I think there are three core factors that go into determining the quality of a given program.

  • Where is the program located? Are there companies in the immediate area just a stage or two ahead that can help you grow?
  • Who is backing the program and what did they invest to make it happen? Do they get involved in the companies they invest in or do they “spray and pray” with their investment?
  • What type of companies have been successful in the accelerator in the past? Who gets funding afterwards? Are they B2B or B2C, SaaS or something else, etc.?

What is less important:

  • Demo Day: The rock-show nature of Demo Days is not a good environment for investors, but you need to take advantage of the intros and the social capital on offer to build those connections yourself.
  • Money: Funding amounts from the accelerator should not influence your decision to go there. Good companies will get funding; build a good company and spend as little as possible doing it.
  • Mentor walls: In Canada there is a relatively small pool of people with both time and capital but there are a lot of people that can help you move the needle in different ways.

Right away some might say that the above “less important” items are what builds momentum, and if you look at the YC companies’ momentum being 3x that of TechStars’, then how can I say they are less important? These things have the greatest effect after the startup object is already in motion, in my opinion. The less important items are used all too often as *the* way to get the startup object moving.

A simple score card to find out who’s best for you

If a score card was set up to measure a program it should look something like this:

  • The program is located near companies that I am interested in working with
    • 1 – none that I know of
    • 3 – some interesting founders
    • 5 – who we would exit to and/or would like on our advisory board are within walking distance
  • Investors in successful companies that have been in the program are
    • 1 – Not involved in investments
    • 3 – one of 12 investors in the companies that graduate
    • 5 – take a board seat and/or a significant position in the financing round following completion of the program.
  • Companies that have been successful in the program in the past are
    • 1 – nothing like us, we are B2B SaaS and all the successful companies are gaming companies
    • 3 – some are similar to us, there is no particular pattern to the type of company
    • 5 – just like us; we are a hardware company and everyone that has done well post-program is a hardware company
  • Funding we receive from the accelerator program is enough to
    • 1 – we can go 6-12 months no problem; it’s great to not have to raise or find revenue right away
    • 3 – it is ok but in 6 months if we don’t have revenue or financing we are done.
    • 5 – we can pay rent while in the program but we have to move and stay lean to survive.

These are by no means research-quality metrics, but they do start to assign some way to weight rankings… for you. If I were going to score YC, I would give them a 5, 3, 4, and 5, which totals 17/20.
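The scorecard above is simple enough to sketch in a few lines. This is a hedged illustration, not a real tool: the criterion labels are paraphrased from the bullets, and summing the four scores out of 20 is an assumption based on the YC example total.

```python
# Minimal sketch of the accelerator scorecard above.
# Criterion labels are paraphrased; the straight sum out of 20
# follows the post's example totalling (5 + 3 + 4 + 5 = 17/20).

CRITERIA = [
    "Located near companies we want to work with",
    "Investor involvement in graduated companies",
    "Past successes resemble our company",
    "Funding is lean enough to keep us hungry",
]

def score_program(scores):
    """Each score is 1-5 (the post anchors 1, 3, and 5); returns the total."""
    if len(scores) != len(CRITERIA):
        raise ValueError("one score per criterion")
    if not all(1 <= s <= 5 for s in scores):
        raise ValueError("scores must be between 1 and 5")
    return sum(scores)

# The post's example scoring for YC: 5, 3, 4, 5
total = score_program([5, 3, 4, 5])
print(f"{total}/{len(CRITERIA) * 5}")  # 17/20
```

Swapping in your own criteria or adding per-criterion weights is a one-line change, which is really the point: the ranking is for you, not for research.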

What else should be on this scorecard?

How does research spending in higher education translate to startups?

The issue of startup funding falling short in Canada is talked about in startup circles as much as the weather in this country. This topic is something I have shared my opinion on before, but that post was aimed at early-stage companies. I am not sure if there really is a problem with funding or just with the companies in Canada that are at that stage. A more serious worry about this conversation is the rationale that academic research (ergo the institutions that conduct it) is less important than VC investment in economic development:

“We’ve bought into the idea that academic research is the engine of economic development and that’s a fallacy,” says Dr. Patricia Lorenz, chair of NAO.

Canada spends roughly $11.3 Billion on Higher Ed-based research. The top school in research spending is the University of Toronto, with $915 Million of that. The next highest school is at $575 Million (UBC); by #6 it drops to $325 Million. This isn’t far off from US schools, but there are a lot more US schools with research spending over $100 Million. Alumni from ‘top schools’ in the US have received $12.5 Billion in funding across 559 deals since 2007. These are people that have been exposed, directly or indirectly, to the environment created around the research spending of those schools. We don’t have similar data in Canada (that I know of).

What were the Federal (government) research dollars spent at the “top schools” in the US?

  • Stanford University ($840 million)
  • Harvard University ($686 million)
  • University of California, Berkeley ($694 million)
  • New York University (I couldn’t find a number)
  • University of Pennsylvania ($770 million)
  • Massachusetts Institute of Technology ($677 million)

If you assume NYU is a bit above the average of the above in spending, that amounts to roughly $4.5 Billion annually in research spending at just 6 schools. Schools develop the talent that builds the companies that require the funding. Those are big numbers, unless you contrast them with Canadian company R&D spending, which was pegged at $10.9 Billion last year (I don’t know what amount of that goes to sponsored research in universities). RIM and Bombardier, at the top of that list, account for $1.54 and $1.34 Billion respectively.
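The back-of-the-envelope estimate above can be checked directly. The five published figures are from the list; treating NYU as roughly 10% above their average is my placeholder assumption, since the post only says “a bit above average”.

```python
# Rough check of the annual research-spending estimate above.
# NYU's figure was unavailable; assuming it is ~10% above the
# average of the other five schools is a placeholder assumption.

spending_m = {          # federal research dollars, $M per year
    "Stanford": 840,
    "Harvard": 686,
    "UC Berkeley": 694,
    "Penn": 770,
    "MIT": 677,
}

known = sum(spending_m.values())              # 3667 ($M)
nyu_estimate = known / len(spending_m) * 1.1  # "a bit above average"
annual_b = (known + nyu_estimate) / 1000      # total, $B per year

print(round(annual_b, 1))  # 4.5
```

At roughly $4.5B per year, five years of spending lands near the $22.5B figure used later in the post.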

In total, three times the research spending of the University of Toronto is going to closed research to aid mobile devices, snowmobiles, planes, and trains. I am not saying that is a bad thing at all. What I want to point out is that, relatively speaking, Universities really don’t spend that much on research, factoring in the diversity of the research and the number of people that benefit from it directly or indirectly.

The persistent question, which people tend to oversimplify: how much of the research spending at Universities translates into economic development? The answer depends on your metrics. Typically people point towards the commercialization results of a university. That is only a part of the picture. The numbers above from 6 schools in the US that spend $22.5 Billion over five years ($4.5B/yr) turned out students that raised $12.5 Billion in financing. Are you going to question any of those top US schools’ commercialization or their role as an “engine of economic development?”

btw, ATI was founded in a dorm room at UofT in 1984 and exited for how many billions after changing the world of computers? eBay? Wattpad? Do we need to list them all? We probably do.

Why I like a Customer Advisory Board

Standard

Seeking out customer feedback and using it to build a great product is not a new concept. Great designers have been doing it for a long time, as have great companies. The Lean Startup manual (or startup bible to many) talks about involving the customer while developing that Minimum Viable Product: “The minimum viable product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.”

Where that generally leads people is straight to building a simple application that might not be sexy in its design but is functional, or a landing page for a new product that might not exist yet. Using Google Analytics and collecting email addresses along with some ‘conversion’ point becomes what you focus on. However, if you forget to actually talk to customers as well, you could be wasting a lot of time. Especially when you are moving past your MVP or have a product that people are paying for.

Live and die by automated metrics

Sales numbers and in-application metrics don’t give you the whole story. Your sales numbers tell you that your marketing is working and your sales people are doing their jobs, especially if sales are going up and to the right. What those numbers don’t tell you clearly enough is whether people are finding the product they actually want or whether it is simply close enough that they want to give it a try. In-application usage metrics might help identify if people love it, or what part they love, but they can put you in a situation where you have to react instead of being proactive on product. By the time you realize you might just have good marketing and sales people while the product is off the mark (it becomes obvious when your churn rate climbs), you have lost momentum.

Momentum is so very important for a startup or any business for that matter. If people aren’t coming back for more, it is a big problem for most companies. Talking to customers regularly helps you understand the metrics better but taking customer feedback and translating it to product effectively eludes many because it is an art, not a science.

I like the ‘old fashioned’ process of drawing out a prototype, getting time with who you think your customer is, and spending time listening to them along with answering their questions. Automation can come later. If I want to collect the most information with the least effort, I want paper prototypes and discussions with customers. I can’t emphasize enough how important that first highly talkative but very loosely connected (socially) customer is (the Alpha customer).

Once you have built something you can track all the metrics you want, but I would argue you will not get good insight on your customers with automated measurements alone. Having sat in enough usability studies over the years, I don’t think how people use something usually reflects their interest in using it. By sitting down and talking to customers you build a much better persona in your mind of who your customers are and what those automated numbers actually mean.

Going back to the customer regularly

If you want customer feedback to be enshrined in the organizational culture, you need to put a little bit of formal process around it. The established way to do this is by setting up a ‘Customer Advisory Board’ made up of the customers who give you the most feedback. You can call it something else if you would like, but don’t leave out the basics.

  • Treat the customers in this group with the same respect you treat your Board of Directors. They are volunteering their time and providing invaluable feedback.
  • Be as open and transparent with this group as possible. Create an environment where they aren’t afraid to offend you about your product, and be totally open with them about future product direction. If you feel they must sign a Non-Disclosure Agreement to be comfortable being open, ask them to do so.

How it works is really simple. You have a scheduled meeting at a regular interval, use your favorite screen-sharing conference application (Google Hangouts works really well, btw), and start listening to your customers. Take lots of notes, write a summary of the meeting, and share it with the board members. You can use that to keep the conversation going via email between meetings. From there you can decide what level of participation and engagement works for you. It takes time to use a Customer Advisory Board well, so ease into it and enjoy it.

The advantage to setting up something a bit more formal is that you are making a commitment to use it. For a startup, the timing could be day 1 as you build out your MVP. If you are a company with a product that is selling now but don’t have a customer feedback process in place, you should think about setting one up.

Meaningful metrics for incubators and accelerators

Standard

Incubators and accelerators are businesses just like the businesses they intend to help develop as they travel through the startup lifecycle. As with any business, there are indicators that they can measure to give them a better idea of how they are performing besides the big public relations buzz around a company being funded.

You need to measure these numbers so that when a success happens you can hopefully gain some insights on how to help the other companies better. The problem is that even though the model of an incubator or accelerator is generally known, how to take 10 companies and have 10 successful growth companies come out the other side of the program is not.

The issue of what metrics to use is an important but complicated problem to solve.

Set the baseline at the application process (pre-program)

There are far more applicants than slots offered in an incubator or accelerator program. However, it is at this point that a program gathers its best intelligence. You need a baseline measurement at the start of the program that you can measure every team against. What you should be tracking:

  • Who applied to the program that you didn’t accept (this is your control sample)
    • Track their progress on Angellist, Crunchbase, and/or go back to their web site in 3, 6, 12 months.
    • Keep a ratio of who is still in business and what their status is.
  • Maintain, in a CRM system, information on the applicant founders and their team members.
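As a sketch of what that control-sample tracking might look like at the 3, 6, or 12 month check-in (the company names, field names, and numbers are all made up, not tied to any particular CRM):

```python
# Hypothetical status records for applicants you rejected,
# updated at each 3/6/12 month check-in.
rejected_applicants = [
    {"name": "Startup A", "still_in_business": True,  "status": "raised seed round"},
    {"name": "Startup B", "still_in_business": False, "status": "shut down"},
    {"name": "Startup C", "still_in_business": True,  "status": "bootstrapping"},
    {"name": "Startup D", "still_in_business": True,  "status": "pivoted"},
]

# The ratio of who is still in business is the number you keep over time.
alive = sum(1 for a in rejected_applicants if a["still_in_business"])
survival_ratio = alive / len(rejected_applicants)
print(f"Control sample survival: {survival_ratio:.0%}")  # → Control sample survival: 75%
```

Tracked quarter over quarter, this ratio is what you eventually compare your accepted cohort against.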

Measure the incubator/accelerator clients (in-program)

At this point there are X startups with Y founders and maybe Z employees. What you want to measure are things that demonstrate whether they have improved, and that you would expect to improve as a result of the services provided by any incubator or accelerator:

  • Current customers and revenue per customer (for most that will be 0 at the start) that will work across revenue models: CAC, ARPU, churn rate.
  • Sales funnel – do they have leads? How many? Are they qualified leads? What are they worth?
  • Average user growth in the last month.
  • What mentors or advisors did they meet through the program? What role did they take with the company?

Run these numbers at the start and at the end of the program. If you are a purely research-focused incubator, ignore this section. You have a much longer time horizon for success, but few are truly research focused.
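The three revenue-model metrics above have standard definitions, and a start-vs-end snapshot is easy to sketch. The numbers below are invented for illustration:

```python
def cac(sales_marketing_spend, new_customers):
    """Customer acquisition cost: total sales + marketing spend / customers acquired."""
    return sales_marketing_spend / new_customers

def arpu(revenue, customers):
    """Average revenue per user: total revenue / current customers."""
    return revenue / customers

def churn_rate(lost_customers, customers_at_start):
    """Fraction of customers lost over the period."""
    return lost_customers / customers_at_start

# Hypothetical snapshots for one company at program start and program end.
start = {"cac": cac(5000, 10), "arpu": arpu(2000, 10), "churn": churn_rate(2, 10)}
end   = {"cac": cac(4000, 20), "arpu": arpu(6000, 25), "churn": churn_rate(1, 25)}

print(start)  # {'cac': 500.0, 'arpu': 200.0, 'churn': 0.2}
print(end)    # {'cac': 200.0, 'arpu': 240.0, 'churn': 0.04}
```

A company whose CAC and churn fell while ARPU rose over the program, like the one above, is the shape of improvement you are looking for.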

Monitor the graduates: Alumni (post-program)

This is a very important thing an incubator/accelerator can do — build and maintain its alumni connections. These folks not only help at every stage of running future programs but their success lifts the profile of the program, just like how alumni of prestigious business schools make the business schools prestigious.

There should be reporting milestones at a set interval (probably financial quarter based) where you gain the following insights on the company:

  • Customer growth percentage: CAC, ARPU, and churn rate all expressed as percentage growth.
  • Sales funnel growth expressed as a percentage.
  • Average user growth in the last month.
  • What mentors or advisors are currently active with the company?

Ideally you should have a position that is equivalent to a close advisor or board observer with the company once it graduates from the program.
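Expressing those alumni metrics as percentage growth per reporting interval is a one-line calculation; the quarterly figures here are made up for illustration:

```python
def growth_pct(previous, current):
    """Quarter-over-quarter growth expressed as a percentage of the previous value."""
    return (current - previous) / previous * 100

# Hypothetical quarterly snapshots for one graduate company.
customers_q1, customers_q2 = 120, 150
arpu_q1, arpu_q2 = 40.0, 44.0

print(f"Customer growth: {growth_pct(customers_q1, customers_q2):.1f}%")  # → Customer growth: 25.0%
print(f"ARPU growth: {growth_pct(arpu_q1, arpu_q2):.1f}%")                # → ARPU growth: 10.0%
```

The same function works for the sales funnel and user growth lines; the point is to report every alumni metric on the same percentage scale so companies and cohorts are comparable.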

Defining success

If an incubator or accelerator program is successful, the graphs should be heading up and to the right at a much faster pace than they would have been had startups not entered the program.

The only baseline data I know of is from the Startup Genome. In their report they explain the stages and the average length of time it takes a company to go through them. For an incubator or accelerator to demonstrate that they work, I would expect a successful company to move through the stages faster than the average. I would also expect them to fail faster than the average.

Tracking metrics puts a lot more overhead on an accelerator. It is likely more than they budgeted for at the start. However, if you want to know whether the program is successful, it is worth the investment of an admin salary to track and crunch the data. This is just a baseline; track more and figure out what the indicators of success are for you.