Same Again Please

 

It’s often said that every family has its secret — Uncle Fred’s fondness for the horses, Cousin Bertha’s promiscuity, etc. — whatever it is that ‘we don’t talk about.’ If that’s true, the scientific community is no exception. For us the unutterable is reproducibility — meaning you’ve done an experiment, new in some way, but the key questions are: ‘Can you do it again with the same result?’ and, even more important: ‘Can someone else repeat it?’

Once upon a time in my lab we had a standing joke: whoever came bounding along shouting about a new result would be asked ‘How reproducible is it?’ Reply: ‘100%!’ Next question: ‘How often have you done the experiment?’ Reply: ‘Once!!’ Boom, boom!!!

Not a rib-tickler but it did point to the knottiest problem in biological science, namely that, when you start tinkering with living systems, you’re never fully in control.

How big is the problem?

But, as dear old Bob once put it, The Times They Are a-Changin’. Our problem was highlighted in the cancer field by the Californian biotechnology company Amgen who announced in 2012 that, over a 10-year period, they’d selected 53 ‘landmark’ cancer papers — and failed to replicate 47 of them! Around the same time a study by Bayer HealthCare found that only about one in four of published studies on potential drug targets was sufficiently strong to be worth following up.

More recently the leading science journal Nature found that almost three quarters of over 1,500 research scientists surveyed had tried to replicate someone else’s experiment and failed. It gets worse! More than half of them owned up to having failed to repeat one of their own experiments! Hooray! We have a result!! If you can’t repeat your own experiment either you’re sloppy (i.e., you haven’t done exactly what you did the first time) or you’ve highlighted the biological variability in the system you’re studying.

If you want an example of biological variation you need look no further than human conception and live births. Somewhere in excess of 50% of fertilized human eggs don’t make it to birth. In other words, if you do a ‘thought experiment’ in which a group of women carry some sort of gadget that flags when one of their eggs is fertilized, only between one in two and one in five of those ‘flagged’ will actually produce an offspring.
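The arithmetic behind that thought experiment is simple enough to sketch in a few lines. The group size and the exact survival fractions below are illustrative only (the ‘one in two’ and ‘one in five’ bounds quoted above), not figures from Jarvis’s data:

```python
# Illustrative arithmetic for the thought experiment above. If more than
# half of fertilized eggs are lost before birth, the survival fraction
# lies somewhere below one in two; the text quotes a range down to one
# in five. The group size here is hypothetical.

def births_expected(n_flagged, survival_fraction):
    """Expected live births from a given number of 'flagged' fertilizations."""
    return n_flagged * survival_fraction

flagged = 1000  # hypothetical number of flagged fertilizations
low = births_expected(flagged, 1 / 5)   # one in five survive to birth
high = births_expected(flagged, 1 / 2)  # one in two survive to birth
print(f"{flagged} flagged fertilizations -> roughly {low:.0f} to {high:.0f} live births")
```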

However you look at it, whether it’s biological variation, incompetence or plain fraud, we have a problem and Nature’s survey revealed that, to their credit, the majority of scientists agreed that there was a ‘significant crisis.’

Results of the Nature survey (from Baker, 2016).

Predictably, but disturbingly for us in the biomedical fields, the greatest confidence in published results was shown by the chemists and physicists whereas only 50% of data in medicine were thought to be reproducible. Oh dear!

Tackling the problem in cancer

The Reproducibility Project: Cancer Biology, launched in 2013, is a collaboration between the Center for Open Science and Science Exchange.

The idea was to take 50 cancer papers published in leading journals and to attempt to replicate their key findings in the most rigorous manner. The number was reduced from 50 to 29 papers due to financial constraints and other factors but the aim remains to find out what affects the robustness of experimental results in preclinical cancer research.

It is a formidable project. Before even starting an experiment, the replication teams devised detailed plans, based on the original reports and, as the result of many hours’ effort, came up with a strategy that both they and the original experimenters considered was the best they could carry out. The protocols were then peer reviewed and the replication plans were published before the studies began.

Just to give an idea of the effort involved, a typical replication plan comprises many pages of detailed protocols describing reagents, cells and (where appropriate) animals to be used, statistical analysis and any other relevant items, as well as incorporating the input from referees.

The whole endeavor is, in short, a demonstration of scientific practice at its best.

To date ten of these replication studies have been published.

How are we doing?

The critical numbers are that 6 of the 10 replications ‘substantially reproduced’ the original findings, although in 4 of these some results could not be replicated. In 4 of the 10 replications the original findings were not reproduced.

The first thing to say is that a 60% rate of ‘substantial’ successful replication is a major improvement on the 11% to 25% obtained by the biotech companies. The most obvious explanation is that the massive, collaborative effort to tighten up the experimental procedures paid dividends.
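As a back-of-the-envelope check, the replication rates quoted in this piece work out as follows (counts as reported above):

```python
# Rough replication rates quoted in the text (counts as reported).
amgen = 6 / 53      # Amgen: 6 of 53 'landmark' papers replicated (47 failed)
bayer = 1 / 4       # Bayer HealthCare: roughly one in four worth following up
project = 6 / 10    # Reproducibility Project: 6 of 10 'substantially reproduced'

for name, rate in [("Amgen", amgen), ("Bayer", bayer),
                   ("Reproducibility Project", project)]:
    print(f"{name}: {rate:.0%}")
```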

The second point to note is that even when a replication attempt fails it cannot be concluded that the original data were wrong. The discrepancy may merely have highlighted how fiendishly tricky biological experimentation can be. The problem is that with living systems, be they cells or animals, you never have complete control. Ask anyone who has a cat.

More likely, however, than biological variation as a cause of discrepancies between experiments is human variation, aka personal bias.

This may come as a surprise to some but, rather than being ‘black and white’, much of scientific interpretation is subjective. Try as I might, can I be sure that in, say, counting stained cells I don’t include some marginal ones because that fits my model? OK: the solution to that is to get someone else to do the count ‘blind’ — but I suspect that quite often that’s not done. However, there are even trickier matters. I do half a dozen repeats of an experiment and one gives an odd result (i.e., differs from the other five). Only I can really go through everything involved (from length of coffee breaks to changes in reagent stocks) and decide if there are strong enough grounds to ignore it. I do my best to avoid personal bias but … scientists are only human (fact!).
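One way to take that judgement call out of the experimenter’s hands is to fix an exclusion rule before looking at the data. A minimal sketch of such a pre-registered outlier rule (the two-standard-deviation cutoff and the replicate values are purely illustrative, not a recommendation from the text):

```python
import statistics

def flag_outliers(values, z_cutoff=2.0):
    """Flag replicates lying more than z_cutoff sample SDs from the mean.
    Crucially, the rule (and cutoff) is chosen BEFORE seeing the data,
    so the decision to drop a point isn't left to wishful thinking."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / sd > z_cutoff]

# Six hypothetical replicates: five agree, one looks odd.
replicates = [0.98, 1.02, 1.01, 0.99, 1.00, 1.60]
print(flag_outliers(replicates))
```

Whether a simple rule like this is appropriate depends on the experiment; the point is only that the criterion is written down in advance.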

A closer look at failure

One of the failed replications is a particularly useful illustration for this blog. The replication study tackled a 2012 report that bacterial infection (specifically a bacterium, Fusobacterium nucleatum, that occurs naturally in the human oral cavity) is present in human colon cancers but not in non-cancerous colon tissues. It hit the rocks. They couldn’t detect F. nucleatum in most tumour samples and, when they did, the number of bugs was not significantly different to that in adjacent normal tissue.

Quite by chance, a few months ago, I described some more recent research into this topic in Hitchhiker or Driver?

I thought this was interesting because it showed that not only was F. nucleatum part of the microbiome of bowel cancer but that when tumour cells spread to distant sites (i.e., underwent metastasis) the bugs went along for the ride — raising the key question of whether they actually helped the critical event of metastasis.

So this latest study was consistent with the earlier result and extended it — indeed they actually showed that antibiotic treatment to kill the bugs slowed the growth of human tumour cells in mice.

Where does that leave us?

Well, helpfully, the Reproducibility Project also solicits comments from independent experts to help us make sense of what’s going on. Step forward Cynthia Sears of The Johns Hopkins Hospital. She takes the view that, although the Replication Study didn’t reproduce the original results, the fact that numerous studies have already found an association between F. nucleatum and human colon cancer means there probably is one — consistent with the work described in Hitchhiker or Driver?

One possible explanation for the discrepancy is that the original report studied colon tissue pairs (i.e., tumour and tumour-adjacent tissues) from colon cancer patients but did not report possibly relevant factors like age, sex and ethnicity of patients. In contrast, the replication effort included samples from patients with cancer (tumour and adjacent tissue) and non-diseased control tissue samples from age, sex and ethnicity matched individuals.

So we now know, as Dr. Sears helpfully remarks, that the association between F. nucleatum bugs and human colon cancer is more complicated than it first appeared! Mmm. And, just in case you were in any doubt, she points out that we need to know more about the who (which Fusobacterium species: there are 12 of them known), the where (where in the colon, where in the world) and the how (the disease mechanisms).

Can we do better?

In the light of all that the obvious question is: what can we do about the number of pre-clinical studies that are difficult if not impossible to reproduce? Answer, I think: not much. Rather than defeatist this seems to me a realistic response. There’s no way we could put in place the rigorous scrutiny of the Reproducibility Project across even a fraction of cancer research projects. The best we can do is make researchers as aware as possible of the problems and encourage them towards the very best practices — and assume that, in the end, the solid results will emerge and the rest will fall by the wayside.

Looking at the sharp end, it’s worth noting that, if you accept that some of the variability in pre-clinical experiments is down to the biological variation we mentioned above, it would at least be consistent with the wide range of patient responses to some cancer treatments. The reason for that, as Cynthia Sears didn’t quite put it, is that we just don’t know enough about how the humans we’re tinkering with actually work.

References

Baker, M. (2016). Is There a Reproducibility Crisis? Nature 533, 452-454.

Jarvis, G.E. (2017). Early embryo mortality in natural human reproduction: What the data say [version 2; referees: 1 approved, 2 approved with reservations] F1000Research 2017, 5:2765 (doi: 10.12688/f1000research.8937.2).

Baker, M. and Dolgin, E. (2017). Cancer reproducibility project releases first results. Nature 541, 269–270. doi:10.1038/541269a.

Begley, C.G. and Ellis, L.M. (2012). Drug development: Raise standards for preclinical cancer research. Nature 483, 531–533.

Prinz, F., Schlange, T. and Asadullah, K. (2011). Believe it or not: how much can we rely on published data on potential drug targets? Nature Rev. Drug Discov. 10, 712.

Caveat emptor

 

It must be unprecedented for publication of a scientific research paper to make a big impact on a significant sector of the stock market. But, in these days of ‘spin-off’ companies and the promise of unimaginable riches from the application of molecular biology to every facet of medicine and biology, perhaps it was only a matter of time. Well, the time came with a bang this June when the journal Nature Medicine published two papers from different groups describing essentially the same findings. Result: three companies (CRISPR Therapeutics, Editas Medicine and Intellia) lost about 10% of their stock market value.

I should say that a former student of mine, Anthony Davies, who runs the Californian company Dark Horse Consulting Inc., mentioned these papers to me before I’d spotted them.

What on earth had they found that so scared the punters?

Well, they’d looked in some detail at CRISPR/Cas9, a method for specifically altering genes within organisms (that we described in Re-writing the Manual of Life).

Over the last five years it’s become the most widely used form of gene editing (see, e.g., Seeing a New World and Making Movies in DNA) and, as one of the hottest potatoes in science, the subject of fierce feuding over legal rights, who did what and who’s going to get a Nobel Prize. Yes, scientists squabble as well as anyone when the stakes are high.

Nifty though CRISPR/Cas9 is, it has not worked well in stem cells — these are the cells that can keep on making more of themselves and can turn themselves into other types of cell (i.e., differentiate — which is why they’re sometimes called pluripotent stem cells). And that’s a bit of a stumbling block because, if you want to correct a genetic disease by replacing a defective gene with one that’s OK, stem cells are a very attractive target.

Robert Ihry and colleagues at the Novartis Institutes for Biomedical Research got over this problem by modifying the Cas9 DNA construct so that it was incorporated into over 80% of stem cells and, moreover, they could switch it on by the addition of a drug. Turning on the enzyme Cas9 to make double-strand breaks in DNA in such a high proportion of cells revealed very clearly that this killed most of them.

When cells start dying the prime suspect is always P53, a so-called tumour suppressor gene, switched on in response to DNA damage. The p53 protein can activate a programme of cell suicide if the DNA cannot be adequately repaired, thereby preventing the propagation of mutations and the development of cancer. Sure enough, Ihry et al. showed that in stem cells a single cut is enough to turn on P53 — in other words, these cells are extremely sensitive to DNA damage.

Gene editing by Cas9 turns on P53 expression. Left: control cells with no activation of double strand DNA breaks; right: P53 expression (green fluorescence) several days after switching on expression of the Cas9 enzyme. Scale bar = 100 micrometers. From Ihry et al., 2018.

In a corresponding study Emma Haapaniemi and colleagues from the Karolinska Institute and the University of Cambridge, using a different type of cell (a mutated line that keeps on proliferating), showed that blocking P53 (hence preventing the damage response) improves the efficiency of genome editing. Good if you want precision genome editing but risky as it leaves the cell vulnerable to tumour-promoting mutations.

Time to buy?!

As ever, “Let the buyer beware” and this certainly isn’t a suggestion that you get on the line to your stockbroker. These results may have hit share prices but they really aren’t a surprise. What would you expect when you charge uninvited into a cell with a molecular bomb — albeit one as smart as CRISPR/Cas9? The cell responds to the DNA damage as it’s evolved to do — and we’ve known for a long time that P53 activation is exquisitely sensitive: one double-strand break in DNA is enough to turn it on. If the damage can’t be repaired P53’s job is to drive the cell to suicide — a perfect system to prevent mutations accumulating that might lead to cancer. The high sensitivity of stem cells may have evolved because they can develop into every type of cell — thus any fault could be very serious for the organism.

It’s nearly 40 years since P53 was discovered but for all the effort (over 45,000 research papers with P53 in the title) we’re still remarkably ignorant of how this “Guardian of the Genome” really works. By comparison gene editing, and CRISPR/Cas9 in particular, is in its infancy. It’s a wonderful technique and it may yet be possible to get round the problem of the DNA damage response. It may even turn out that DNA can be edited without making double strand breaks.

So maybe don’t rush to buy gene therapy shares — or to sell them. As the Harvard geneticist George Church put it “The stock market isn’t a reflection of the future.” Mind you, as a founder of Editas Medicine he’d certainly hope not.

References

Ihry, R.J. et al. (2018). p53 inhibits CRISPR–Cas9 engineering in human pluripotent stem cells. Nature Medicine, 1–8.

Haapaniemi, E. et al. (2018). CRISPR–Cas9 genome editing induces a p53-mediated DNA damage response. Nature Medicine, 11 June 2018.

Fantastic stuff

 

It certainly is for Judy Perkins, a lady from Florida, who is the subject of a research paper published last week in the journal Nature Medicine by Nikolaos Zacharakis, Steven Rosenberg and their colleagues at the National Cancer Institute in Bethesda, Maryland. Having reached a point where she was enduring pain and facing death from metastatic breast cancer, the paper notes that she has undergone “complete durable regression … now ongoing for over 22 months.”  Wow! Hard to even begin to imagine how she must feel — or, for that matter, the team that engineered this outcome.

How was it done?

Well, it’s a very good example of what I do tend to go on about in these pages — namely that science is almost never about ‘ground-breaking breakthroughs’ or ‘Eureka’ moments. It creeps along in tiny steps, sideways, backwards and sometimes even forwards.

You may recall that in Self Help – Part 2, talking about ‘personalized medicine’, we described how in one version of cancer immunotherapy a sample of a patient’s white blood cells (T lymphocytes) is grown in the lab. This is a way of either getting more immune cells that can target the patient’s tumour or of being able to modify the cells by genetic engineering. One approach is to engineer cells to make receptors on their surface that target them to the tumour cell surface. Put these cells back into the patient and, with luck, you get better tumour cell killing.

An extra step (Gosh! Wonderful GOSH) enabled novel genes to be engineered into the white cells.

The Shape of Things to Come? took a further small step when DNA sequencing was used to identify mutations that gave rise to new proteins in tumour cells (called tumour-associated antigens or ‘neoantigens’: molecular flags on the cell surface that can provoke an immune response, i.e., the host makes antibody proteins that react with, or stick to, the antigens). Charlie Swanton and his colleagues from University College London and Cancer Research UK used this method on two samples of lung cancer, growing the cells in the lab to expand the population and testing how good these tumour-infiltrating cells were at recognizing the neoantigens on cancer cells.

Now Zacharakis & Friends followed this lead: they sequenced DNA from the tumour tissue to pinpoint the main mutations and screened the immune cells they’d grown in the lab to find which sub-populations were best at attacking the tumour cells. Expand those cells, infuse into the patient and keep your fingers crossed.

Adoptive cell transfer. This is the scheme from Self Help – Part 2 with the extra step (A) of sequencing the breast tumour. Four mutant proteins were found and tumour-infiltrating lymphocytes reactive against these mutant versions were identified, expanded in culture and infused into the patient.

 

What’s next?

The last step with the fingers was important because there’s almost always an element of luck in these things. For example, a patient may not make enough T lymphocytes to obtain an effective inoculum. But, regardless of the limitations, it’s what scientists call ‘proof-of-principle’. If it works once it’ll work again. It’s just a matter of slogging away at the fine details.

For Judy Perkins, of course, it’s about getting on with a life she’d prepared to leave — and perhaps, once in a while, glancing in awe at a Nature Medicine paper that does not mention her by name but secures her own little niche in the history of cancer therapy.

References

McGranahan, N. et al. (2016). Clonal neoantigens elicit T cell immunoreactivity and sensitivity to immune checkpoint blockade. Science, doi: 10.1126/science.aaf490.

Zacharakis, N. et al. (2018). Immune recognition of somatic mutations leading to complete durable regression in metastatic breast cancer. Nature Medicine 04 June 2018.

Now You See It

 

In the pages of this blog we’ve often highlighted the power of fluorescent tags to track molecules and see what they’re up to. It’s a method largely pioneered by the late Roger Tsien and it has revolutionized cell biology over the last 20 years.

In parallel with molecular tagging has come genetic engineering that permits novel genes, usually carried by viruses, to be introduced to cells and animals. As we saw in Gosh! Wonderful GOSH and Blowing Up Cancer, various ‘virotherapy’ approaches have been used with some success to treat leukemias and skin cancers and a trial is underway in China treating metastatic non-small cell lung cancer.

A major aim of genetic engineering is to be able to control the expression of novel genes (i.e. protein production from the encoding DNA sequence) that have been introduced into an animal — in the jargon, to switch them on or off at will. That can be done, but only by administering a drug or some other regulator, either in drinking water, by injection or by squirting it directly into the lungs. An ideal would be something that’s more controlled and less invasive. How about shining a light on the relevant spot?!

Wacky or what?

That may sound as though we’re veering towards science fiction but reflect for a moment that every animal with vision, however rudimentary, sees by transforming light entering the eyes into electrical signals that the brain turns into a picture of the world around them. This relies on photoreceptor proteins that span the membranes of retinal cells.

How vision works. Light passes through the lens and falls on the retina at the back of the eye. The photoreceptor cells it activates are rod cells (that respond to low light levels — there are about 100 million of them) and cone cells (stimulated by bright light). Sitting across the membranes of these cells are photoreceptor proteins — rhodopsin in rods and photopsin in cones. Photoreceptor proteins change shape when light falls on them — the driver for this being a small chemical attached to the proteins called retinal, one of the many forms of vitamin A. This shape change allows the proteins to ‘talk’ to the inside of the cell, i.e. to interact with other proteins to switch on enzymes and change the level of ions (sodium and calcium). The upshot is that the signal is passed through neural cells in the optic nerve to the brain where the incoming light signals are processed into the images that we perceive.

The seemingly far-fetched notion of controlling genes by light was floated by Francis Crick in 1999. The field was launched in 2002 by Boris Zemelman and Gero Miesenböck who engineered neurons to express one form of rhodopsin. This gave birth to the subject of optogenetics — using light to control cells in living tissues that have been genetically modified to express light-sensitive ion channels such as rhodopsin. By 2010 optogenetics had advanced to being the ‘Method of the Year’ according to the research journal Nature Methods.

Dropping like flies

One of the most dramatic demonstrations of the power of optogenetics has come from Robert Kittel and colleagues in Würzburg and Göttingen who made a mutant form of a protein called channelrhodopsin-2 (found in green algae) and expressed it in fruit flies (Drosophila melanogaster). The mutant protein (ChR2-XXL) carries very large photocurrents of ions (critically sodium and calcium) with the result that photostimulation can drastically change the behaviour of freely moving flies.

Light-induced stimulation of motor neurons in adult flies expressing the mutant channelrhodopsin ChR2-XXL.

Left hand tube: Activation of ChR2-XXL in motor neurons with white light LEDs caused reversible immobilization of adult flies. In contrast (right hand tube) flies expressing normal (wild-type) channelrhodopsin-2 showed no response. From Dawydow et al., 2014.

Other optogenetic experiments on flies can be viewed on YouTube, e.g., the TED talk of Gero Miesenböck and the Manchester Fly Facility video of fly maggots, engineered to have a channel protein (channelrhodopsin) in their neurons, responding to blue light.

Of flies … and mice … and men

This is stunning science and it’s opened a new vista in neurobiology. But what about the things we’re concerned with in these pages — treating diseases like diabetes and cancer?

Scheme showing how genetic engineering can make the release of insulin from cells controllable by light. Normally cells of the pancreas (beta cells) take up glucose when its level in the circulation rises (via a glucose transporter protein). The rise in glucose triggers ATP production in the cell. This in turn causes potassium channels in the membrane to close, depolarizing the cell, and this opens calcium channels. The increase in calcium in the cell drives insulin secretion. From Kushibiki et al., 2015.

The left-hand scheme above shows how glucose triggers the pancreas to produce the hormone insulin. Diabetes occurs when either the pancreas doesn’t make enough insulin or when cells of the body don’t respond properly to insulin by taking up glucose.

As a first step to see whether optogenetic regulation of calcium levels in pancreatic cells could trigger insulin release, Toshihiro Kushibiki and colleagues at the National Defense Medical College in Saitama, Japan engineered the channelrhodopsin-2 protein into mouse cells and hit them with laser light of the appropriate frequency. An hour after a short burst of light (a few seconds) the insulin levels had doubled.

The photo below shows a clump of these cells: the nuclei are blue and the channel protein (yellow) can be seen sitting across the cell membranes.

 

Cells expressing a fluorescently tagged channelrhodopsin protein (yellow). Nuclei are blue. From Kushibiki et al., 2015.

To show that this could work in animals they suspended the engineered cells in a gel and inoculated blobs of the goo under the skin of diabetic mice. Laser burst again: blood glucose levels fell and they showed this was due to the irradiated, implanted cells producing insulin.

Fast forward three years

Those brilliant results highlighted the potential of optogenetic technology as a completely novel approach to a disease that afflicts over 300 million people worldwide.

Scheme showing how a smartphone can be used to regulate the release of insulin from engineered cells implanted in a mouse with diabetes. The key events in the cell are that the light-activated receptor turns on an enzyme (BphS) that in turn controls a transcription regulator (FRTA) that binds to a DNA construct to switch on the Gene Of Interest (GOI) — in this case encoding insulin. (shGLP1, short human glucagon-like peptide 1, is a hormone that stimulates insulin secretion.) From Shao et al., 2017.

In a remarkable confluence of technologies Jiawei Shao and colleagues from a number of institutes in Shanghai, including the Shanghai Academy of Spaceflight Technology, and from ETH Zürich have recently published work that takes the application of optogenetics well and truly into the twenty-first century.

They figured that, as these days nearly everyone lives with their smartphone, the world could use a diabetes app. Essentially they designed a home server SmartController to process wireless signals so that a smartphone could control insulin production by cells in gel capsules implanted in mice. There are differences in the genetic engineering of these cells from those used by Kushibiki’s group but the critical point is unchanged: laser light stimulates insulin release. The capsules carry wirelessly powered LEDs.

The only other thing needed is to know glucose levels. Because mice are only little and they’ve already got their gel capsule, rather than implanting a monitor they took a drop of blood from the tail and used a glucometer. However, looking ahead to human applications, continuous glucose monitors are now available that, placed under the skin, can transmit a radio signal to the controller and, ultimately, it will be possible for the gel capsules to have a built-in battery plus glucose sensor and the whole thing could work automatically.
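The closed-loop arrangement described above boils down to a very simple controller: poll the glucose sensor, and light the implant’s LED when glucose runs high. A sketch of that loop follows; the threshold, the polling interval and the read_glucose/set_led hooks are hypothetical stand-ins, not the interfaces Shao et al. actually used:

```python
import time

GLUCOSE_THRESHOLD = 10.0  # mmol/L; hypothetical trigger level, not from the paper

def control_step(glucose_mmol_per_l):
    """One decision of a simple on/off controller: illuminate the implant
    (driving insulin release from the optogenetic cells) when glucose is high."""
    return glucose_mmol_per_l > GLUCOSE_THRESHOLD

def run_loop(read_glucose, set_led, interval_s=300, steps=None):
    """Poll a glucose sensor and drive the LED accordingly.
    read_glucose and set_led are stand-ins for real sensor/implant interfaces."""
    step = 0
    while steps is None or step < steps:
        set_led(control_step(read_glucose()))
        step += 1
        if steps is None:
            time.sleep(interval_s)  # only sleep in 'run forever' mode

# Dry run with canned readings instead of real hardware:
readings = iter([6.2, 11.5, 12.1, 7.8])
states = []
run_loop(lambda: next(readings), states.append, steps=4)
print(states)
```

A real system would of course need smoother control than this on/off rule, but the division of labour (sensor, decision, actuator) is the same.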

Any chance of illuminating cancer?

This science is so breathtaking it seems cheeky to ask but, well, I’d say ‘yes but not just yet.’ So long as the ‘drug’ you wish to use can be made biologically (i.e. from DNA by the machinery of the cell), rather than by chemical synthesis, Shao’s Smartphone set-up can readily be adapted to deliver anti-cancer drugs. This might be hugely preferable to the procedures currently in use and would offer an additional advantage by administering drugs in short bursts of lower concentration — a regimen that in some mouse cancer models at least is more effective.

References

Dawydow, A., Kittel, R.J. et al. (2014). Channelrhodopsin-2–XXL, a powerful optogenetic tool for low-light applications. PNAS 111, 13972-13977.

Kushibiki, T. et al. (2015). Optogenetic control of insulin secretion by pancreatic beta-cells in vitro and in vivo. Gene Therapy 22, 553-559.

Shao, J. et al. (2017). Smartphone-controlled optogenetically engineered cells enable semiautomatic glucose homeostasis in diabetic mice. Science Translational Medicine 9, eaal2298.

Another Fine Mess

 

Did you guess from the title that this short piece is about the seeming inability of the British Government to run, well, most things but especially IT programmes? Of course you did! Provoked by the latest National Health Service furore. In case you’ve been away with the fairies for a bit, a major cock-up in its computer system has just come to light whereby, between 2009 and 2018, it failed to invite 450,000 women between the ages of 68 and 71 for breast screening. Secretary of State for Health, Jeremy Hunt (our man usually on hand with a can of gasoline when there’s a fire), told Parliament that “there may be between 135 and 270 women who had their lives shortened”. Cue: uproar, headlines: HUNDREDS of British women have died of breast cancer (Daily Express), etc.

Logo credit: Breast Cancer Action

I’ve been reluctant to join in because I’ve said all I think is worth saying about breast cancer screening in two earlier pieces (Risk Assessment and Behind the Screen). Reading them again I thought they were a reasonable summary and I don’t think there’s anything new to add. However, this is a cancer blog and it’s a story that’s made big headlines so I feel honour-bound to offer a brief comment — in addition to sympathizing with the women and families who have been caused much distress.

My reaction was that Hunt was misguided in mentioning specific numbers — not only because he was asking for trouble from the press but mainly because the evidence that screening itself saves lives is highly questionable. For an expert view on this my Cambridge colleague David Spiegelhalter, who is Professor for the Public Understanding of Risk, has analysed the facts behind breast screening with characteristic clarity in the New Scientist.

Anything to add?

I was relieved on re-reading Risk Assessment to see that I’d given considerable coverage to the report that had just come out (2014) from The Swiss Medical Board. They’d reviewed the history of mammography screening, concluded that systematic screening might prevent about one breast cancer death for every 1000 women screened, noted that there was no evidence that overall mortality was affected and pointed out that false positive test results presented the risk of overdiagnosis.

In the USA, for example, over a 10-year course of annual screening beginning at 50 years of age, one breast-cancer death would have been prevented whilst between 490 and 670 women would have had a false positive mammogram calling for a repeat examination, 70 to 100 an unnecessary biopsy and between 3 and 14 would have been diagnosed with a cancer that would never have become a problem.
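Taking those US figures at face value, the trade-off is stark. (Such figures are usually quoted per 1000 women screened; the denominator is an assumption here, since the text doesn’t state one.)

```python
# The US screening figures quoted above, taken at face value.
# Commonly expressed per 1000 women screened annually for ten years from
# age 50 (the per-1000 denominator is an assumption, not from the text).
deaths_prevented = 1
false_positives = (490, 670)     # range of false-positive mammograms
unnecessary_biopsies = (70, 100)
overdiagnosed = (3, 14)          # cancers that would never have caused trouble

lo, hi = (n / deaths_prevented for n in false_positives)
print(f"Between {lo:.0f} and {hi:.0f} false positives for each death prevented")
```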

Needless to say, this landed the Swiss Big Cheeses in very hot water because there’s an awful lot of vested interests in screening and it’s sort of instinctive that it must be a good thing. But what’s great about science is that you can do experiments — here actually analysing the results of screening programmes — and quite often the results turn out to be completely unexpected, as they did in this case where the bottom line was that mammography does more harm than good.

This has led to the recommendation that the current programmes in Switzerland should be phased out and not replaced.

So we’re all agreed then?

Of course not. In England the NHS recommendation remains that women aged 50 to 70 are offered mammography every three years — which is just as well or we’d have Hunt explaining the recent debacle as a new initiative. The American Cancer Society “strongly” recommends regular screening mammography starting at age 45 and the National Cancer Institute refers to “experts” who recommend mammography every year starting at age 25 for women with mutations in their BRCA1 or BRCA2 genes.

The latter is really incredible because a study published in the British Medical Journal in 2012 found that these mutations made the carriers much more vulnerable to radiation-induced cancer. Specifically, women with BRCA1/2 mutations who were exposed to diagnostic radiation (i.e. mammography) before the age of 30 were twice as likely to develop breast cancer, compared to those with normal BRCA genes.

They are susceptible to radiation that would not normally be considered dangerous because the two BRCA genes encode proteins involved in the repair of damaged DNA — and if that is defective you have a recipe for cancer.

Extraordinary.

So it’s probably true that the only undisputed fact is that we need much better ways of detecting cancers at an early stage of development. The best hope at the moment seems to be the liquid biopsy approach we described in Seeing the Invisible: A Cancer Early Warning System? but that’s still a long way from solving the general problem of early cancer detection — a problem well illustrated by breast mammography.

No It Isn’t!

 

It’s great that newspapers carry the number of science items they do but, as regular readers will know, there’s nothing like the typical cancer headline to get me squawking ‘No it isn’t!’ Step forward The Independent with the latest: “Major breakthrough in cancer care … groundbreaking international collaboration …”

Let’s be clear: the subject usually is interesting. In this case it certainly is and it deserves better headlines.

So what has happened?

A big flurry of research papers has just emerged from a joint project of the National Cancer Institute and the National Human Genome Research Institute to make something called The Cancer Genome Atlas (TCGA). This massive initiative is, of course, an offspring of the Human Genome Project, the first full sequencing of the 3,000 million base-pairs of human DNA, completed in 2003. The intervening 15 years have seen a technical revolution, perhaps unparalleled in the history of science, such that now genomes can be sequenced in an hour or two for a few hundred dollars. TCGA began in 2006 with the aim of providing a genetic database for three cancer types: lung, ovarian, and glioblastoma. Such was its success that it soon expanded to a vast, comprehensive dataset of more than 11,000 cases across 33 tumour types, describing the variety of molecular changes that drive the cancers. The upshot is now being called the Pan-Cancer Atlas — PanCan Atlas, for short.

What do we need to know?

Fortunately not much of the humungous amounts of detail but the scheme below gives an inkling of the scale of this wonderful endeavour — it’s from a short, very readable summary by Carolyn Hutter and Jean Claude Zenklusen.

TCGA by numbers. The scale of the effort and output from The Cancer Genome Atlas. From Hutter and Zenklusen, 2018.

The first point is obvious: sequencing 11,000 paired tumour and normal tissue samples produced mind-boggling masses of data. 2.5 petabytes, in fact. If you have to think twice about your gigas and teras, 1 PB = 1,000,000,000,000,000 B, i.e. 10¹⁵ B or 1,000 terabytes. A PB is sometimes called, apparently, a quadrillion bytes — and, as the scheme helpfully notes, you’d need over 200,000 DVDs to store it.
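If you like to check this sort of arithmetic, a few lines of Python will do it. (The 4.7 GB single-layer DVD capacity is my assumption, not a figure from the paper.)

```python
# Back-of-the-envelope check on the TCGA storage figures.
# A standard 4.7 GB single-layer DVD is assumed here.
PB = 10**15                 # bytes in a petabyte
DVD = 4.7 * 10**9           # bytes on a single-layer DVD
dataset = 2.5 * PB          # the TCGA output

print(f"DVDs per petabyte: {PB / DVD:,.0f}")       # ~212,766
print(f"DVDs for 2.5 PB:   {dataset / DVD:,.0f}")  # ~531,915
```

So the “over 200,000 DVDs” figure corresponds to a single petabyte; the whole dataset would need getting on for half a million discs.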

The 33 different tumour types included all the common cancers (breast, bowel, lung, prostate, etc.) and 10 rare types.

The figure of seven data types refers to the variety of information accumulated in these studies (e.g. mutations that affect genes; epigenetic changes (DNA methylation); RNA and protein expression; duplication or deletion of stretches of DNA (copy number variation); and so on).

After which it’s worth pausing for a moment to contemplate the effort and organization involved in collecting 11,000 paired samples, sequencing them and analyzing the output. It’s true that sequencing itself is now fairly routine, but that’s still an awful lot of experiments. But think for even longer about what’s gone into making some kind of sense of the monstrous amount of data generated.

And it’s important because?

The findings confirm a trend that has begun to emerge over the last few years, namely that the classification of cancers is being redefined. Traditionally they have been grouped on the basis of the tissue of origin (breast, bowel, etc.) but this will gradually be replaced by genetic grouping, reflecting the fact that seemingly unrelated cancers can be driven by common pathways.

The most encouraging thing to come out of this survey of the genetic changes driving these tumours is that for about half of them potential treatments are already available. That’s quite a surprise but it doesn’t mean that hitting those targets will actually work as an anti-cancer strategy. Nevertheless, it’s a cheering point that the output of this phenomenal project may, as one of the papers noted, serve as a launching pad for real benefit in the not too distant future.

What should science journalists do to stop upsetting me?

Read the papers they comment on rather than simply relying on press releases, never use the words ‘breakthrough’ or ‘groundbreaking’ and grasp the point that science proceeds in very small steps, not always forward, governed by available methods. This work is quite staggering for it is on a scale that is close to unimaginable and, in the end, it will lead to treatments that will affect the lives of almost everyone — but it is just another example of science doing what science does.

References

Hutter, C. and Zenklusen, J.C. (2018). The Cancer Genome Atlas: Creating Lasting Value beyond Its Data. Cell 173, 283–285.

Hoadley, K.A. et al. (2018). Cell-of-Origin Patterns Dominate the Molecular Classification of 10,000 Tumors from 33 Types of Cancer. Cell 173, 291–304.

Hoadley, K.A. et al. (2014). Multiplatform Analysis of 12 Cancer Types Reveals Molecular Classification within and across Tissues of Origin. Cell 158, 929–944.

John Sulston: Biologist, Geneticist and Guardian of our Heritage

 

Sir John Sulston died on 6 March 2018, an event reported world-wide by the press, radio and television. Having studied in Cambridge and then worked at the Salk Institute in La Jolla, California, he joined the Laboratory of Molecular Biology in Cambridge to investigate how genes control development and behaviour, using as a ‘model organism’ the roundworm Caenorhabditis elegans. This tiny creature, 1 mm long, was appealing because it is transparent and most adult worms are made up of precisely 959 cells. Simple it may be but this worm has all the bits required to live, feed and reproduce (i.e. a gut, a nervous system, gonads, etc.). For his incredibly painstaking efforts in mapping from fertilized egg to mature animal how one cell becomes two, two become four and so on to complete the first ‘cell-lineage tree’ of a multicellular organism, Sulston shared the 2002 Nobel Prize in Physiology or Medicine with Bob Horvitz and Sydney Brenner.

Sir John Sulston

It became clear to Sulston that the picture of how genes control development could not be complete without the corresponding sequence of DNA, the genetic material. The worm genome is made up of 100 million base-pairs and in 1983 Sulston set out to sequence the whole thing, in collaboration with Robert Waterston, then at Washington University in St. Louis. This was a huge task with the technology available but their success indicated that the much greater prize of sequencing the human genome — ten times as much DNA as in the worm — might be attainable.

In 1992 Sulston became head of a new sequencing facility, the Sanger Centre (now the Sanger Institute), in Hinxton, Cambridgeshire that was the British component of the Human Genome Project, one of the largest international scientific operations ever undertaken. Astonishingly, the complete human genome sequence, finished to a standard of 99.99% accuracy, was published in Nature in October 2004.

As the Human Genome Project gained momentum it found itself in competition with a private venture aimed at securing the sequence of human DNA for commercial profit — i.e., the research community would be charged for access to the data. Sulston was adamant that our genome belonged to us all and with Francis Collins — then head of the US National Human Genome Research Institute — he played a key role in establishing the principle of open access to such data, preventing the patenting of genes and ensuring that the human genome was placed in the public domain.

One clear statement of this intent was that, on entering the Sanger Centre, you were met by a continuously scrolling read-out of human DNA sequence as it emerged from the sequencers.

In collaboration with Georgina Ferry, Sulston wrote The Common Thread, a compelling account of an extraordinary project that has, arguably, had a greater impact than any other scientific endeavour.

For me and my family John’s death was a heavy blow. My wife, Jane, had worked closely with him since inception of the Sanger Centre and not only had his scientific influence been immense but he had also become a staunch friend and source of wisdom. At the invitation of John’s wife Daphne, a group of friends and relatives gathered at their house after the funeral. As darkness fell we went into the garden and once again it rang to the sound of chatter and laughter from young and old as we enjoyed one of John’s favourite party pastimes — making hot-air lanterns and launching them to drift, flickering to oblivion, across the Cambridgeshire countryside. John would have loved it and it was a perfect way to remember him.

Then …

When John Sulston set out to ‘map the worm’ the tools he used could not have been more basic: a microscope — with pencil and paper to sketch what he saw as the animal developed. His hundreds of drawings tracked the choreography of the worm to its final 959 cells and showed that, along the way, 131 cells die in a precisely orchestrated programme of cell death. The photomontage and sketch below are from his 1977 paper with Bob Horvitz and give some idea of the effort involved.

Photomontage of a microscope image (top) and (lower) sketch of the worm Caenorhabditis elegans showing cell nuclei. From Sulston and Horvitz, 1977.

 … and forty years on

It so happened that within a few days of John’s death Achim Trubiroha and colleagues at the Université Libre de Bruxelles published a remarkable piece of work that is really a descendant of his pioneering studies. They mapped the development of cells from egg fertilization to maturity in a much bigger animal than John’s worms — the zebrafish. They focused on one group of cells in the early embryo (the endoderm) that develop into various organs including the thyroid. Specifically they tracked the formation of the thyroid gland that sits at the front of the neck wrapped around part of the larynx and the windpipe (trachea). The thyroid can be affected by several diseases, e.g., hyperthyroidism, and in about 5% of people the thyroid enlarges to form a goitre — usually caused by iodine deficiency. It’s essential to determine the genes and signalling pathways that control thyroid development if we are to control these conditions.

For this mapping Trubiroha’s group used the CRISPR method of gene editing to mutate or knock out specific targets and to tag cells with fluorescent labels — that we described in Re-writing the Manual of Life.

A flavour of their results is given by the two sets of fluorescent images below. These show in real time the formation of the thyroid after egg fertilization and the effect of a drug that causes thyroid enlargement.

Live imaging of transgenic zebrafish to follow thyroid development in real-time (left). Arrows mark chord-like cell clusters that form hormone-secreting follicles (arrowheads) during normal development. The right hand three images show normal development (-) and goiter formation (+) induced by a drug. From Trubiroha et al. 2018.

John would have been thrilled by this wonderful work and, with a chuckle, I suspect he’d have said something like “Gosh! If we’d had gene editing back in the 70s we’d have mapped the worm in a couple of weeks!”

References

International Human Genome Sequencing Consortium (2004). Finishing the euchromatic sequence of the human genome. Nature 431, 931–945.

John Sulston and Georgina Ferry The Common Thread: A Story of Science, Politics, Ethics and the Human Genome (Bantam Press, 2002).

Sulston, J.E. and Horvitz, H.R. (1977). Post-embryonic Cell Lineages of the Nematode, Caenorhabditis elegans. Developmental Biology 56, 110–156.

Trubiroha, A. et al. (2018). A Rapid CRISPR/Cas-based Mutagenesis Assay in Zebrafish for Identification of Genes Involved in Thyroid Morphogenesis and Function. Scientific Reports 8, Article number: 5647.

Bonkers Really … but …

 

This is just in case you spotted the headline in January 2018: ‘Scientists Counted All The Protein Molecules in a Cell And The Answer Really Is 42. This is so perfect.’ 

Them scientists eh! The things they get up to!! The scallywags in this case were Brandon Ho & chums from the University of Toronto and Signe Dean, the journalist who came up with the headline, was referring, of course, to Douglas Adams’s “Answer to the Ultimate Question of Life …” in The Hitchhiker’s Guide to the Galaxy — though it may be noted that Ho’s paper includes neither the number 42 nor mention of Douglas Adams.

The cult that has evolved around this number is both amusing and bizarre, not least because Adams himself explained that he dreamed 42 up out of the blue. In a different context a while ago (talking about how the way you get to work might affect your life expectancy) I recounted happy evenings spent carousing in The Baron (well, having a quiet jar or two) with Douglas Adams and friends from which it was clear that he was not into abstruse mathematics, astrology or the occult. He just had a vivid imagination.

Anything for a catchy headline but

Aside from the whimsy, is there anything interesting in this paper? Well, yes. Ho & Co studied a type of yeast (Saccharomyces cerevisiae) that is mighty important because it’s been a foundation for brewing and baking since ancient times. So no merry sessions in The Baron of Beef without it! Its cells are about the same size as red blood cells (5–10 microns in diameter) but you can actually see them sometimes as films on the skin of fruit. It’s played a huge role in biology as a ‘model organism’ for studying how we work because the proteins it makes that are essential for life are pretty well identical to those in human cells — so much so that you can swap those that control cell growth and division between the two. Yeast proteins work just fine in human cells and vice versa.

 

Yeast on the skin of a grape. Photo: Barbara W. Beacham

 

The question Ho & Co asked was ‘how many protein molecules are there in one cell?’ In the age when you can sequence the DNA of practically anything at the drop of a hat, you might think we’d know the answer already but in fact it’s not been at all clear. Accordingly, what these authors did was to pull together all the relevant studies that have been done to come up with an absolute figure. The answer that emerged was that the number of protein molecules per yeast cell is 4.2 × 10⁷ — which, of course, can also be written as 42 million. Eureka! We have our headline!! Albeit, as the authors noted, with a two-fold error range.
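If you fancy playing with the headline number yourself, here’s a scrap of Python showing the figure together with the two-fold error range the authors quote:

```python
# The headline figure from the yeast proteome study,
# with its quoted two-fold error range.
n = 4.2e7                     # protein molecules per yeast cell
low, high = n / 2, n * 2      # two-fold error in either direction

print(f"{n / 1e6:.0f} million molecules per cell")
print(f"plausible range: {low:.2e} to {high:.2e}")
```

In other words the true figure could be anywhere from about 21 million to 84 million — which rather takes the shine off “really is 42”.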

Does anyone care?

Now you’re just being awkward. You should be grateful to be made to picture for a moment tens of millions of proteins jiggling around in little sacs so small you could get tens of thousands of these cells on the head of a pin. And somehow, in that heaving molecular city, each protein manages to carry out its own task so that the cell works. It is quite staggering.

Mention of tasks leads to the other question Ho et al. looked at: how many copies are there of the different types of protein? We know from its DNA sequence that this yeast has about 6,000 genes (Saccharomyces Genome Database). So that’s at least 6,000 different proteins. Not surprisingly, it turns out that about two thirds of them are in the middle in terms of abundance — i.e. there are between 1,000 and 10,000 molecules of each sort per cell. The rest are either low abundance (up to about 800 molecules per cell) or at the high end — 140,000 to 750,000, i.e. somewhere in the region of half a million copies of each type of protein.

Does this distribution make sense in terms of what these proteins do?

You know the answer because if it didn’t the Toronto team wouldn’t have got their work published. And indeed it does: proteins present in large numbers are, for example, part of the machinery that makes new proteins (so they’re slaving away all the time), whereas those present in small numbers do things like repair and replicate DNA and drive cells to divide — important jobs but ones that are only intermittently needed.

These results aren’t going to turn science on its head but it is awe-inspiring when a piece of work really brings us face-to-face with the stunning complexity of biology. And if it takes a bonkers headline to catch our eye, so be it!

Reference

Ho, B. et al. (2018). Unification of Protein Abundance Datasets Yields a Quantitative Saccharomyces cerevisiae Proteome. Cell Systems. Published online: January 23, 2018.

Hitchhiker Or Driver?

 

It’s a little while since we talked about what you might call our hidden self — the vast army of bugs that colonises our nooks and crannies, especially our intestines, and that is essential to our survival.

In Our Inner Self we noted that these little guys outnumber the human cells that make up the body by about ten to one. Actually that estimate has recently been revised — downwards you might be relieved to hear — to about 1.3 bacterial cells per human cell but it doesn’t really matter. They are a major part of what’s called the microbiome — the community of microorganisms that call our bodies home but on which we also depend for our very survival.

In our personal army there’s something like 700 different species of bacteria, with thirty or forty making up the majority. We upset them at our peril. Artificial sweeteners, widely used as food additives, can change the proportions of types of gut bacteria. Some antibiotics that kill off bacteria can make mice obese — and they probably do the same to us. Obese humans do indeed have reduced numbers of bugs and obesity itself is associated with increased cancer risk.

In it’s a small world we met two major bacterial sub-families, Bacteroidetes and Firmicutes, and noted that their levels appear to affect the development of liver and bowel cancers. Well, the Bs & Fs are still around you’ll be glad to know but in a recent piece of work the limelight has been taken by another bunch of Fs — a sub-group (i.e. related to the Bs & Fs) called Fusobacterium.

It’s been known for a few years that human colon cancers carry enriched levels of these bugs compared to non-cancerous colon tissues — suggesting, though not proving, that Fusobacteria may be pro-tumorigenic. In the latest, pretty amazing, instalment Susan Bullman and colleagues from Harvard, Yale and Barcelona have shown that not merely is Fusobacterium part of the microbiome that colonises human colon cancers but that when these growths spread to distant sites (i.e. metastasise) the little Fs tag along for the ride!

Bacteria in a primary human bowel tumour. The arrows show tumour cells infected with Fusobacteria (red dots).

Bacteria in a liver metastasis of the same bowel tumour. Though more difficult to see, the red dot (arrow) marks the presence of bacteria from the original tumour. From Bullman et al., 2017.

In other words, when metastasis kicks in it’s not just the tumour cells that escape from the primary site but a whole community of host cells and bugs that sets sail on the high seas of the circulatory system.

But doesn’t that suggest that these bugs might be doing something to help the growth and spread of these tumours? And if so might that suggest that … of course it does and Bullman & Co did the experiment. They tried an antibiotic that kills Fusobacteria (metronidazole) to see if it had any effect on F-carrying tumours. Sure enough it reduced the number of bugs and slowed the growth of human tumour cells in mice.

Growth of human tumour cells in mice. The antibiotic metronidazole slows the growth of these tumours by about 30%. From Bullman et al., 2017.

We’re still a long way from a human therapy but it is quite a startling thought that antibiotics might one day find a place in the cancer drug cabinet.

Reference

Bullman, S. et al. (2017). Analysis of Fusobacterium persistence and antibiotic response in colorectal cancer. Science  358, 1443-1448. DOI: 10.1126/science.aal5240

RoboClot

 

It was the Chinese, inevitably, who invented paper — during the Eastern Han period around 200 CE (or AD as I’d put it). Presumably by 201 AD some of the lads at the back of the class had discovered that this new stuff could be folded and launched to land on the desk of the local Confucius, generating much hilarity and presumably a few whacks with a bamboo cane.

Folding molecules

Not to be outdone some 21st century scholars have shown that you can do molecular origami with DNA. The idea is fairly simple: take a long strand of DNA (several thousand bases) and persuade it to fold into specific shapes by adding ‘staples’ — short bits of DNA (oligonucleotides). When you mix them together the staples and scaffold strands self-assemble in a single step. It’s pretty amazing but it’s driven by the simple concept of Watson-Crick base pairing (adenine (A) binds to thymine (T); guanine (G) to cytosine (C)).
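The pairing rule itself is simple enough to capture in a few lines of Python — a toy sketch, with made-up sequences, of how a staple recognises its stretch of scaffold:

```python
# Toy illustration of Watson-Crick complementarity, the rule that drives
# DNA-origami self-assembly. Sequences here are invented for illustration.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the strand that base-pairs with `seq` (read 5'->3')."""
    return "".join(PAIR[base] for base in reversed(seq))

scaffold = "ATGCCGTAACGT"                   # a short stretch of scaffold
staple = reverse_complement(scaffold[2:8])  # staple designed against bases 3-8

# The staple binds wherever its reverse complement occurs in the scaffold:
site = scaffold.find(reverse_complement(staple))
print(f"staple {staple} binds the scaffold at position {site}")
```

Real origami designs juggle hundreds of staples over thousands of bases, but every one of them finds its place by exactly this kind of complementarity.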

These things are, of course, almost incomprehensibly small — they are biological molecules remember — each being a few nanometers long. Which means that you can plonk a billion on the head of a pin.

Working on this scale has given rise to the science of nanorobotics — making gadgets on a nanometre scale (10⁻⁹ metres or one thousandth of a millionth of a metre) and the gizmos themselves are nanorobots — nanobots to their friends.

Making parcels of DNA must be great fun but it’s not much use until you include the fact that you can stick protein molecules to your DNA carrier. If you choose a protein that has a known target, for example, something on the surface of a cell, you can now mail the parcels to an address within the body simply by injecting them into the circulation.

Molecular origami: Making a DNA parcel with a targeting molecule. A bacteriophage is a virus that infects and replicates in bacteria, used here to make single strands of DNA. Short DNA ‘staples’ are designed to fold the scaffold DNA into specific shapes. Adding an aptamer (a short piece of DNA that binds to a specific target molecule on a cell (an antigen)) permits targeting of the nanobot. When it sticks to a cell the package opens and the molecular payload is released (from Fu and Yan, 2012).

Open with care

Hao Yan and colleagues from Arizona State University have now taken nanobots a step further by adding a second protein to their targeted vehicle. For targeting they used something that sticks to a protein present on the surface of cells that line the walls of blood vessels when they are proliferating (the target protein’s called nucleolin). Generally these (endothelial) cells aren’t proliferating so they don’t make nucleolin — and the nanobots pass them by. But growing tumours need to make their own blood supply. To do that they stimulate new vessels to sprout into the tumour (called angiogenesis) and this is what Hao Yan’s nanobots target.

As an anti-cancer tactic the nanobots carried a second protein: thrombin. This is a critical part of the process of coagulation by which damaged blood vessels set about repairing themselves. Thrombin’s role is to convert fibrinogen (circulating in blood) to fibrin strands, hence building up a blood clot to plug the hole. In effect the nanobots cause thrombosis, inducing a blood clot to block the supply line to the tumour.

Blood clotting (coagulation). Platelets form a plug strengthened by fibrin produced by the action of thrombin.

Does it work?

These DNA nanorobots showed no adverse effects either in mice or in Bama miniature pigs, which exhibit high similarity to humans in anatomy and physiology.

Fluorescently labeled nanobots did indeed target tumour blood vessels: the DNA wrapping opens when they attach to cells and the thrombin is released …

Fluorescent nanobots targeting tumour blood vessels (Li et al. 2018). The nanorobots have stuck to cells lining blood vessels (endothelial cells: green membrane) by attaching to nucleolin. After 8 hours the nanorobots (red) have been taken up by the cells and can be seen next to the nucleus (blue).

Most critically these little travellers did have effects on tumour growth. The localized thrombosis caused by the released thrombin resulted in significant tumour cell death and marked increase in the survival of treated mice.

Robotic DNA machines are now being referred to as ‘intelligent vehicles’ — a designation I’m not that keen on. Nevertheless, this is a cunning strategy, not least because, although much effort has gone into anti-angiogenic therapies for cancer, they have not been notably successful. Simply administering thrombin would presumably be fatal but, well wrapped up and correctly addressed, it seems to deliver.

Reference

Fu, J. and Yan, H. (2012). Controlled drug release by a nanorobot. Nature Biotechnology 30, 407-408.

Li, S. et al. (2018). A DNA nanorobot functions as a cancer therapeutic in response to a molecular trigger in vivo. Nature Biotechnology. doi:10.1038/nbt.4071.