Bilski Redux and Why You Shouldn’t Believe Everything You Read

The Bilski decision came down yesterday and I’m still in a state of complete denial.  Basically, the court punted on the difficult issues: while it denied Bilski his patent, it did nothing to improve the horrible state of the patent ecosystem we have today.

(For a great summary of the case, check out Groklaw’s writeup.)

To make my stomach even more upset, today I was alerted to an article by Ted Sichelman entitled “Why Bilski Benefits Startup Companies.”

In short, Sichelman points to a study he was involved with and argues that these types of patents are good for startups.

To quote him:

“in a recent survey of startup firms, the Berkeley Patent Survey—which I conducted with Robert Merges and Pamela Samuelson of UC Berkeley School of Law and Stuart Graham (now Chief Economist at the PTO)—startup executives reported that nearly 70% of venture capital firms and 50% of angel investors said that patents were important to their investment decisions.”

While I vehemently disagree with the article, what I found most interesting was a commenter who cited a prior post of mine on why the study Sichelman was involved in may be flawed.

Sichelman attempts to refute my post in the comment section, but fails badly.

First of all, it seems clear to me that Sichelman has intuitions about patents based on his own experiences and has fit the data to his theories, rather than using the data in an unbiased way to figure out what is really going on with patents and startups.

I make this assertion based on a couple of observations:

1. Every time he speaks about patents, he begins with the story of his one experience with a startup company and how patents may have helped.  I’ve had dinner with Ted and I’ve heard the story.  I’ve also seen the story pop up every time he discusses patents.  A sample size of one does not make a scientific data set.

2. Sichelman’s co-authors are nowhere to be found when he comes up with his conclusions.  Ted acknowledges that he doesn’t speak for his co-authors, but he freely uses the word “we” when discussing the study and “his” conclusions.  The arguments in the blog post I wrote refuting some parts of the study’s conclusions were not all my own ideas – they were the thoughts of his co-author Pam Samuelson, who herself said the article really doesn’t say anything about VC attitudes toward patents.

It’s really clear that Sichelman has a bias, probably preconceived from a data set of one (his startup), and one not supported by his fellow authors, who have not backed him up publicly.

Furthermore, if you read his comments on my blog post, his rebuttals don’t hold water either.  (You’ll want to read the comments for this part of the post to make any sense of it.)

1. Response rates – just because yours is the most comprehensive study doesn’t necessarily make it a good one.  It might, it might not.  I could be the world’s tallest midget and that still doesn’t get me much (no offense to midgets, sincerely).  I never definitively said the sample size was too low; rather, it’s not rock-solid clear that it was the right size or targeted the right companies.  It’s not an easy thing for them to do, granted, but we shouldn’t just accept the number “1300” being thrown out and assume that it is sufficient.  And per Sichelman’s own admission in his comments, only about 175 of the respondents were VC-backed startup companies.  That is not a large number (see the back-of-the-envelope sketch after this list).

2.  Only 75% answered the patent question, and Sichelman says this is acceptable.  It is not.  In fact, others involved with the study have specifically questioned whether the answer rate was a piece of data in itself.  Again, I’m not saying definitively that it is; rather, the way Sichelman uses data like this as “proof” is not dispositive.

3. Results biased toward non-venture-backed companies.  Sichelman again presents a non-compelling argument.  First, two-thirds of the sample, according to his co-author Pam Samuelson, were D&B companies, not VentureExpert companies.  Second, his attempt to convince readers that I only have a sample size of 25 current portfolio companies is either poor research on his part about me or a willful ignoring of the facts.  I’ve been involved in VC for over a decade and with well over 250 companies, which alone is larger than his sample of 175 companies.

4. (My Favorite) – Just because we didn’t survey VCs doesn’t mean that we don’t know what VCs think.  To quote him:

“VCs were not surveyed directly – Although it would have been more reliable to survey VCs directly, unfortunately, our time and resources were limited. Nonetheless, there is little reason to believe that the reports of executives at startup firms regarding the views of VCs during the financing process—which is lengthy and involved—are inaccurate. Rather, executives are presumably well-aware of those items that VCs found important during due diligence.”

Basically, his response is: “we couldn’t afford to interview VCs, so we just guessed by asking entrepreneurs.”  This is totally bogus, and my critique is backed up by Pam Samuelson herself in recent remarks at the University of Colorado law school.  The survey only captures the perceptions that entrepreneurs have of VCs; it says nothing about what VCs actually think.  Substituting one study group for another and presenting the result as fact discredits the valid parts of the paper.  This is just bad science.  If it were good science, we could just ask parents what their kids really think about things.
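To put rough numbers on points 1 and 2 above, here’s a minimal back-of-the-envelope sketch in Python.  This is my own illustration, not anything from the study, and it assumes simple random sampling, which the survey may well not satisfy.  The point is that even before you ask whether the right companies were surveyed, a sample of roughly 175 (of whom only about 75% answered the patent question) carries a meaningful uncertainty band around that headline 70% figure:

# Back-of-the-envelope check (mine, not the study's): how much
# statistical uncertainty does a sample of ~175 VC-backed startups carry?
# Assumes simple random sampling, which the survey may well not satisfy.
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated from n answers."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.70  # the reported figure: ~70% said patents mattered to VCs

# All ~175 VC-backed respondents vs. the ~75% who answered the patent question
for n in (175, int(175 * 0.75)):
    print(f"n = {n}: 70% +/- {margin_of_error(p, n):.1%}")

# Output:
#   n = 175: 70% +/- 6.8%
#   n = 131: 70% +/- 7.8%
# And that is the best case: it ignores non-response bias, i.e. the chance
# that the 25% who skipped the question differ systematically from those
# who answered.

Whether a swing of seven or eight points is fatal is debatable, but it’s exactly the kind of caveat that gets lost when the survey is cited as “proof.”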

In summary, it’s been a rough day thinking about what could have been with Bilski.  I’m getting a ton of backchannel about the politics behind the decision, which only makes me more upset.  Watching people try to capitalize on a poor decision with articles like this just makes me more disappointed in the system and in the supposed “experts” who pretend to know much more than they really do.