Sunday, July 24, 2011

on knowing when to quit

Parts of science are extremely frustrating in that almost nothing works the way you want on the first try. Many assays and techniques need optimization and multiple rounds of confirmation before they yield decent data. But how do you know when to abandon something? Or is it ok to simply keep beating your head against a wall?

A good example is from 2009, when I was trying to clone an antagonist to receptor X. I spent July to November cloning, and when I finally got a clone, I found I couldn't get cells to produce the antagonist in large quantities, which meant I had to either reclone it or optimize the production methods. Altering the production methods didn't work, so I had to reclone it into a different plasmid and test for production again. By the time all this was done, it was March 2010, and I didn't actually get the antagonist tested in cell culture until June 2010.

My adventures in cloning this antagonist aren't the only ones in our lab. My predecessor had done the original mutagenesis and cloning of this protein and found that he couldn't get cells to produce it. When I took over the project, I trimmed part of his original sequence and cloned it into a different plasmid, but that didn't seem to make a difference, so I had to keep cloning until I got a construct that worked.

There are different ways to block cell signaling, but I could tell that my advisor thought it would be useful to have a receptor antagonist as a tool for future studies. But is the time spent on cloning and testing (in this case, more than two years) worth it? How much money and time should be spent to get this one tool?

A second example comes from my time working as an undergrad in the plant lab downstairs. My job was to optimize the primers that came in; because we were working with plant DNA, PCR could be tricky, since plant cells are full of glop (resins, tannins) that interferes with getting good, clean DNA. The problem was that I was even more clueless back then, so I just did what I was told and couldn't explain the strange bands I kept getting on my gels. I ended up trying different annealing temperatures and cycle times to get the primers to amplify the sequence of interest. Someone later asked me, "Why don't you just get new primers? They're pretty cheap."

After I left, the lab did get new (redesigned) primers. I talked to the lab manager after I started grad school, and she told me that someone else had designed the old primers and that the design had a lot of mistakes. But why couldn't we have checked them for mistakes earlier?
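Looking back, even a quick scripted sanity check would have caught obvious design errors before weeks of gradient PCR. Here's a minimal sketch in Python of the kind of check I mean, using made-up placeholder sequences (not the lab's actual primers or gene); a real design check would go through something like Primer3 or Primer-BLAST, but even this much flags a primer that doesn't match its template.

```python
# Quick sanity checks for a primer pair: length, GC content, rough Tm,
# and whether the primer actually matches the template on either strand.
# Sequences below are placeholders for illustration only.

def reverse_complement(seq):
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[base] for base in reversed(seq.upper()))

def gc_content(seq):
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    # Rough melting temperature estimate (Wallace rule): 2(A+T) + 4(G+C).
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

def check_primer(name, primer, template):
    """Print basic stats and whether the primer matches the template
    exactly on either strand; a mismatch suggests a design error."""
    fwd_hit = primer.upper() in template.upper()
    rev_hit = reverse_complement(primer) in template.upper()
    print(f"{name}: len={len(primer)}, GC={gc_content(primer):.0%}, "
          f"Tm~{wallace_tm(primer)}C, "
          f"matches template: {'yes' if fwd_hit or rev_hit else 'NO'}")

if __name__ == "__main__":
    # Placeholder template and primer pair, not real lab sequences.
    template = ("ATGGCTTCTAGCATGCTGAGCTCGATCGATCGTACGTAGCTAGCTAGGCTAGCTA"
                "GGATCCGATTACAGGCTTAAGCCTAGGCATGCATGCATCGATCGATCGTAA")
    forward_primer = "ATGGCTTCTAGCATGCTGAG"
    reverse_primer = "TTACGATCGATCGATGCATG"  # reverse complement of the 3' end

    check_primer("forward", forward_primer, template)
    check_primer("reverse", reverse_primer, template)
```

Something this simple won't tell you about hairpins or primer-dimers, but it would have caught a primer that plain didn't anneal where we thought it did.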

It sounds like many of the reasons for stopping or continuing a given project are financial, but money can't be the only factor. Clearly the benefit of having a functional receptor antagonist was worth the time and funds spent to clone and test it. But what if the tests had shown spotty results, working well sometimes and not at other times? Would we have abandoned the project?

Maybe knowing when to quit is another thing that comes with more time in school...