One More Small Step


Back in the nineteenth century a chap called Augustus De Morgan came up with a set of laws that, when explained in English, sound like the lyrics of a Flanders & Swann song. Opaque to non-maths nerds they may be but they helped to build the mathematics of logic, so next time you meet AND / OR gates in electronics, spare him a thought.

In fact Augustus is rare — maybe unique — among mathematicians in that he’s not completely forgotten, for it was he who penned the lines:

Big fleas have little fleas upon their backs to bite ’em,
And little fleas have lesser fleas, and so, 
ad infinitum.

Given that we now know there are over 2,500 species of fleas, ranging in size from tiny to nearly one centimetre long, it may be literally true. But here, for once, the truth doesn’t matter. It’s a silly rhyme but nonsense verse it is not, for it could well serve as a motto for biology: it captures the essential truth of life — the exquisite choreography by which incomprehensible numbers of interactions come together to make living systems work.

Human fleas. Don’t worry: you’ll know if you have them.

Unbidden, De Morgan’s ditty came into my head as I was reading the latest research paper from David Lyden’s group, which he very kindly sent me ahead of publication this week. Avid readers will know the name, for we have devoted several episodes (Keeping Cancer Catatonic, Scattering the Bad Seed and Holiday Reading (4) – Can We Make Resistance Futile) to the discoveries of his group in tackling one of the key questions in cancer — namely, how do tumour cells find their targets when they spread around the body? Key because it is this process of ‘metastasis’ that causes most (over 90%) of cancer deaths, and if we knew how it worked maybe we could block it.

A succinct summary of those already condensed episodes would be: (1) cells in primary tumours release ‘messengers’ into the circulation that ‘tag’ metastatic sites before any cells actually leave the tumour, (2) the messengers that do the site-tagging are small sacs — mini cells — called exosomes, and (3) they find specific addresses by carrying protein labels (integrins) that home in on different organs — we represented that in the form of a tube train map in Lethal ZIP codes that pulled the whole story together.

The next small step

Now what the folks from Weill Cornell Medicine, New York, Sloan Kettering and a host of other places have done is adapt a flow system to look more closely at exosomes.

Separating small bodies. Particles are injected into a flowing liquid (left) and a cross-flow at right angles, passing through a membrane (bottom), permits separation on the basis of effective size (a method called asymmetrical flow field-flow fractionation).

They found that a wide variety of tumour cell types secrete two distinct populations of exosomes — small (60-80 nanometres diameter) and large (90-120 nm). What’s more they found a third type of nanoparticle, smaller than exosomes (less than 50 nm) and without a membrane — so it’s a kind of blob of lipids and proteins (a micelle would be a more scientific term) — that they christened exomeres.
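Flow field-flow fractionation works because particles of different sizes diffuse at different rates, and the Stokes–Einstein relation says smaller particles diffuse faster. A back-of-envelope sketch (my own illustration, not the authors’ analysis; the temperature, viscosity and representative diameters are assumed values):

```python
import math

def stokes_einstein_D(diameter_nm, temp_K=298.0, viscosity_Pa_s=8.9e-4):
    """Diffusion coefficient (m^2/s) of a sphere in water, via Stokes-Einstein."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    radius_m = (diameter_nm * 1e-9) / 2
    return k_B * temp_K / (6 * math.pi * viscosity_Pa_s * radius_m)

# Representative diameters for the three nanoparticle populations (assumed)
for name, d in [("exomere (~35 nm)", 35), ("small exosome (~70 nm)", 70),
                ("large exosome (~105 nm)", 105)]:
    print(f"{name}: D ≈ {stokes_einstein_D(d) * 1e12:.1f} µm²/s")
```

On these numbers an exomere diffuses roughly three times faster than a large exosome, and that difference in mobility is the handle the cross-flow uses to pull the populations apart.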

Is it real?

A perpetual problem in biology is reproducibility — that is, whether a new finding can be replicated independently by someone else. Or, put more crudely, do I believe this? This is such an important matter that it’s worth a separate blog but for the moment we’re OK because the results in this paper speak for themselves. First, by using electron microscopy, Lyden et al. could actually look at what they’d isolated and indeed discerned three distinct nano-populations — which is how they were able to put the size limits on them.

Electron microscopy of the input mixture (pre-fractionation, left) and the separated fractions: exomeres, small exosomes and large exosomes released by tumour cells. Arrows indicate exomeres (red), small exosomes (blue) and large exosomes (green). From Zhang et al. 2018.

But what’s most exciting in terms of the potential of these results is what’s in the packets. Looking at the fats (lipids), proteins and nucleic acids (DNA and RNA) they contained it’s clear that these are three distinct entities — which makes it very likely they have different effects.

Given their previous findings it must have been a great relief when Lyden & Co identified integrin address proteins in the two exosome sub-populations. But what’s really astonishing is the range of proteins borne by these little chaps: something like 400 in exomeres, about 1,000 in small exosomes and a similar number in the big ones — and the fact that each contains a unique set of proteins. The new guys — exomeres — carry, among other proteins, metabolic enzymes, so it’s possible that when they deliver their cargo they might change the metabolic profile of their target. That could be important as we know such changes happen in cancer.

It’s a bewildering picture and working out even the basics of what these little guys do and how it influences cancer is, as we say, challenging. But I think I know a good man for the job!

Augustus De Morgan looking down.

Mathematicians have a bit of a tendency to look down on us experimentalists thrashing around in the undergrowth and I suspect that up in the celestial library, as old Augustus De Morgan thumbed through this latest paper, a slight smile might have come over his face and he could have been heard to murmur: “See, I told you.”


Zhang, H. et al. (2018). Identification of distinct nanoparticles and subsets of extracellular vesicles by asymmetric flow field-flow fractionation. Nature Cell Biology 20, 332–343. doi:10.1038/s41556-018-0040-4


You Couldn’t Make It Up … But They Do!!


Having just posted a somewhat critical commentary on a recent, much-headlined, study looking at the effect of ‘ultra-processed’ food on cancer risk that was based on what folk said they ate, who should come galloping into the fray this morning but the Office for National Statistics (ONS).

They’ve analysed a National Diet and Nutrition Survey and, surprise surprise, found that the adults surveyed (4,500 of them) were actually tucking away around 50% more calories than they said they ate!

So much for relying on people telling the truth!!

How do they know?

Well, they persuaded 200 punters to drink doubly labelled water as part of their diet (the water is made with heavy variants of hydrogen and oxygen: deuterium and oxygen-18) and pee the truth into a bottle (from the proportions of deuterium and 18O in urine you can work out calorie consumption).
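For the curious, the principle can be sketched in a few lines: deuterium leaves the body only as water, while oxygen-18 leaves as both water and CO2, so the gap between their elimination rates measures CO2 production, and hence energy burned. This is a deliberately simplified version (all the numbers are illustrative, and the real method, the Lifson/Schoeller equations, applies dilution-space and isotope-fractionation corrections):

```python
def co2_production_mol_per_day(body_water_mol, k_O, k_H):
    """Simplified doubly-labelled-water estimate of CO2 production.
    k_O and k_H are the daily elimination rate constants of 18O and 2H.
    Factor of 2 because each CO2 carries two oxygen atoms."""
    return (body_water_mol / 2.0) * (k_O - k_H)

def energy_kcal_per_day(r_co2_mol_day, kcal_per_litre_co2=5.6):
    """Convert CO2 production to energy expenditure, assuming a typical
    food quotient (~0.85), i.e. roughly 5.6 kcal per litre of CO2."""
    litres = r_co2_mol_day * 22.4  # molar volume of a gas, litres at STP
    return litres * kcal_per_litre_co2

# Illustrative figures: ~40 kg of body water is about 2,220 mol;
# the elimination rates per day are hypothetical values
r = co2_production_mol_per_day(2220, k_O=0.12, k_H=0.10)
print(f"CO2: {r:.0f} mol/day → ~{energy_kcal_per_day(r):.0f} kcal/day")
```

With those made-up rate constants the sketch lands in the plausible 2,500–3,000 kcal/day range, which is the whole point of the method: the urine samples give you the rate constants, and the arithmetic does the rest.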

The upshot of all this is that, whilst a rough average figure for desirable calorie intake is 2,500 for a man and 2,000 for a woman, the 4,500 were eating the equivalent of an extra Big Mac a day, with men consuming 3,119 calories rather than the 2,065 they claimed. Women consumed 2,393 calories instead of 1,570.
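The arithmetic behind those figures is easy to check (a quick sketch using the numbers above):

```python
def reporting_gap(actual, claimed):
    """Return (% of actual intake left unreported,
               % by which actual intake exceeds the claimed figure)."""
    under = 100 * (actual - claimed) / actual
    excess = 100 * (actual - claimed) / claimed
    return round(under), round(excess)

print("men:  ", reporting_gap(3119, 2065))
print("women:", reporting_gap(2393, 1570))
```

So reported intake fell about a third short of reality or, put the other way round, true consumption ran roughly 50% above what people claimed, for both sexes.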

Actually, this didn’t come as a great surprise to the ONS guys because they’d spotted that 1 in 3 (34%) of the 4,500 claimed a calorie consumption figure that wouldn’t keep them alive! And, guess what, overweight people and men (of course) are most likely to tell dietary fibs.

Oh dear, I told the French folk in Please … Not Another Helping they shouldn’t believe a word people said. Of course, they will be quick to point out that the ONS is a British outfit reporting on Brits who are notorious cads and bounders. That’s OK then: we can confidently believe what the French tell us about their eating habits — just as we accept that they are the best lovers and have the most sex.

Please … Not Another Helping


You may have seen headlines of the “Processed food, sugary cereals and sliced bread may contribute to cancer risk” ilk, as this recently published study (February 2018) was extensively covered in the media — the Times of London had a front-page spread, no less.

So I feel obliged to follow suit — albeit with a heavy heart: it’s one of those depressing exercises in which you’re sure you know the answer before you start.

Who dunnit?

It’s a mainly French study (well, it is about food) led by Thibault Fiolet, Mathilde Touvier and colleagues from the Sorbonne in Paris. It’s what’s called a prospective cohort study, meaning that a group of individuals, who in this case differed in what they ate, were followed over time to see if diet affected their risk of getting cancers and in particular whether it had any impact on breast, prostate or colorectal cancer. They started acquiring participants about 20 years ago and their report in the British Medical Journal summarized how nearly 105,000 French adults got on consuming 3,300 (!) different food items between them, based on each person keeping 24-hour dietary records designed to record their usual consumption.

Foods were grouped according to degree of processing. The stuff under the spotlight is ‘ultra-processed’ — meaning that it has been chemically tinkered with to get rid of bugs, give it a long shelf-life, make it convenient to use, look good and taste palatable.

What makes a food ‘ultra-processed’ is worked out by something called the NOVA classification. I’ve included their categories at the end.

Relative contribution of each food group to ultra-processed food consumption in diet (from Fiolet et al. 2018).

And the result?

The first thing to be said is that this study is a massive labour of love. You need the huge number of over 100,000 participants even to begin to squeeze out statistically significant effects — so the team has put in a terrific amount of work.

After all the squeezing there emerged a marginal increase in the risk of getting cancer among the ultra-processed food eaters and a similar slight increase specifically for breast cancer (hazard ratios of 1.12 and 1.11 respectively). There was no significant link to prostate or colorectal cancer.
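To see why a hazard ratio of 1.12 counts as marginal, it helps to turn it into absolute risk. A rough sketch (the 3% baseline risk over the follow-up period is my assumption for illustration, not a figure from the paper):

```python
def risk_with_hazard_ratio(baseline_risk, hr):
    """Approximate absolute risk under a hazard ratio, via survival scaling:
    S_exposed = S_baseline ** HR, so risk = 1 - (1 - baseline) ** HR."""
    return 1 - (1 - baseline_risk) ** hr

baseline = 0.03  # assumed 3% baseline risk (illustrative only)
elevated = risk_with_hazard_ratio(baseline, 1.12)
print(f"3.00% baseline → {elevated * 100:.2f}% with HR 1.12")
```

An extra third of a percentage point or so: real, perhaps, but the word ‘marginal’ is doing honest work here.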

Which may mean something. But it’s hard to get excited, not merely because the effects described are small but more so because such studies are desperately fraught and the upshot familiar.

One problem is that they rely on individuals keeping accurate records. Another problem here is that the classification of ‘ultra-processed’ is somewhat arbitrary — and it’s also very broad — leaving one asking what the underlying cause might be: ‘is it sugar, fat or what?’ Furthermore, although the authors tried manfully to allow for factors like smoking and obesity, it’s impossible to do this with complete certainty. The authors themselves noted that, for example, they couldn’t allow for the effects of oral contraception.

The authors are quite right to point out that it is important to disentangle the facets of food processing that bear on our long-term health and that further studies are needed.

I would only add ‘rather you than me.’

Perforce in these pages we have gone on about diets good and bad so there is no need to regurgitate. Suffice to say that my advice on what to eat is the same as that of any other sane person and summarized in Dennis’s Pet Menace — and it’s not been remotely affected by this new research which, in effect, says ‘junk food is probably bad for you in the long run.’ But let’s leave the last word to Tom Sanders of King’s College London: “What people eat is an expression of their life-style in general, and may not be causatively linked to the risk of cancer.” 


Fiolet, T. et al. (2018). Consumption of ultra-processed foods and cancer risk: results from NutriNet-Santé prospective cohort. BMJ 360, k322. doi:10.1136/bmj.k322

NOVA classification:

The ultra-processed food group is defined by opposition to the other NOVA groups: “unprocessed or minimally processed foods” (fresh, dried, ground, chilled, frozen, pasteurised, or fermented staple foods such as fruits, vegetables, pulses, rice, pasta, eggs, meat, fish, or milk), “processed culinary ingredients” (salt, vegetable oils, butter, sugar, and other substances extracted from foods and used in kitchens to transform unprocessed or minimally processed foods into culinary preparations), and “processed foods” (canned vegetables with added salt, sugar coated dried fruits, meat products preserved only by salting, cheeses, freshly made unpackaged breads, and other products manufactured with the addition of salt, sugar, or other substances of the “processed culinary ingredients” group).

Sweet Love …


Sweet love, renew thy force; be it not said

Thy edge should blunter be than appetite,

Which but to-day by feeding is allay’d,

To-morrow sharpen’d in his former might:

No prize for knowing I didn’t write those lines — or even that they’re down to The Bard of Avon. What he was on about here is the distinction between genuine (sweet) love and lust (appetite), the problem being that the latter may be assuaged today but will surely return tomorrow. Had we, by some Star Trek-like device, been able to secure his services for this piece, Shakespeare, master of the double-entendre, would quickly have spotted an opportunity in his new role as pop-sci scribe. For ‘sweet’ read sugar; for ‘appetite’, addiction.

Gary Taubes considers sugar to be the root of most western illnesses. Photograph: Alamy

The combination can be toxic, as the estimable US journalist Gary Taubes has argued over the last 15 years. His latest book The Case Against Sugar has just come out and I’m keen to give it a plug. In so doing I should point out that we’ve also done our best in these pages to make the same case — particularly in relation to cancer. However, it’s a little while since we wrote specifically on sugar, diet and cancer, mainly because nothing really new has caught my eye. Reading again the most relevant of our blog stories I thought they did a pretty good job (as Shakespeare might have said, being a chap not known for modesty). Three I thought worth looking at again are:

Biting the Bitter Bullet: how obesity and cancer quite often come hand-in-hand and how it is that we’re seduced into eating more and more of something that can help us get fat and ill.

A Small Helping For Australia: makes the point that this is a global problem (even though Australia’s wonderful).

The Best Laid Plans in Mice and Men…: artificial sweeteners aren’t the solution – just another problem.

Actually, there is one recent result we might mention — from Ken Peeters, Johan Thevelein & colleagues at the University of Leuven. Bearing in mind the long-established ‘Warburg effect’ by which cancer cells switch the energy supply system that breaks down glucose from respiration (using oxygen) to fermentation (making lactate), they looked at yeast cells that grow fastest when they ferment — much as cancer cells grow quicker than normal cells. Rather remarkably, they discovered a hitherto unknown way in which fermentation links to a key pathway controlling cell proliferation. That pathway centres around a protein called RAS that we met in Mission Impossible.

This finding does not show that eating lots of sugar gives you cancer but what it does show is a way by which, if yeast cells ‘eat’ more sugar, they grow faster. It seems quite possible that the underlying mechanism might work in human cells (the human version of the protein that links sugar metabolism to RAS, called SOS1, works in yeast) — giving an explanation for the well-known fact that the more sugar you eat the fatter you are likely to become. And what we do know is that obesity does raise cancer risk.

I dare say Gary might reckon this result worth a footnote in the second edition.

The Case Against Sugar by Gary Taubes is published by Portobello Books (£14.99).


Peeters, K. et al. (2017). Fructose-1,6-bisphosphate couples glycolytic flux to activation of Ras. Nature Communications 8, 922. doi:10.1038/s41467-017-01019-z

Desperately SEEKing …

These days few can be unaware that cancers kill one in three of us. That proportion has crept up over time as life expectancy has gone up — cancers are (mainly) diseases of old age. Even so, they plagued the ancients, as Egyptian scrolls dating from 1600 BC record and as their mummified bodies bear witness. Understandably, progress in getting to grips with the problem was slow. It took until the nineteenth century before two great French physicians, Laënnec and Récamier, first noted that tumours could spread from their initial site to other locations where they could grow as ‘secondary tumours’. Munich-born Karl Thiersch showed that ‘metastasis’ occurs when cells leave the primary site and spread through the body. That was in 1865 and it gradually led to the realisation that metastasis was a key problem: many tumours could be dealt with by surgery, if carried out before secondary tumours had formed, but once metastasis had taken hold … With this in mind the gifted American surgeon William Halsted applied ever more radical surgery to breast cancers, removing tissues to which these tumours often spread, with the aim of preventing secondary tumour formation.

Early warning systems

Photos of Halsted’s handiwork are too grim to show here but his logic could not be faulted for metastasis remains the cause of over 90% of cancer deaths. Mercifully, rather than removing more and more tissue targets, the emphasis today has shifted to tumour detection. How can they be picked up before they have spread?

To this end several methods have become familiar — X-rays, PET (positron emission tomography) and the like — but, useful though these are in clinical practice, they suffer from being unable to ‘see’ small tumours (less than 1 cm diameter). For early detection something completely different was needed.

The New World

The first full sequence of human DNA (the genome), completed in 2003, opened a new era and, arguably, the burgeoning science of genomics has already made a greater impact on biology than any previous advance.

Tumour detection is a brilliant example for it is now possible to pull tumour cell DNA out of the gemisch that is circulating blood. All you need is a teaspoonful (of blood) and the right bit of kit (silicon chip technology and short bits of artificial DNA as bait) to get your hands on the DNA, which can then be sequenced. We described how this ‘liquid biopsy’ can be used to track responses to cancer treatment in a quick and non-invasive way in Seeing the Invisible: A Cancer Early Warning System?

If it’s brilliant why the question mark?

Two problems really: (1) Some cancers have proved difficult to pick up in liquid biopsies and (2) the method didn’t tell you where the tumour was (i.e. in which tissue).

The next step, in 2017, added epigenetics to DNA sequencing. That is, a programme called CancerLocator profiled the chemical tags (methyl groups) attached to DNA in a set of lung, liver and breast tumours. In Cancer GPS? we described this as a big step forward, not least because it detected 80% of early stage cancers.

There’s still a pesky question mark?

Rather than shrugging their shoulders and saying “that’s science for you” Joshua Cohen and colleagues at Johns Hopkins University School of Medicine in Baltimore and a host of others rolled their sleeves up and made another step forward in the shape of CancerSEEK, described in the January 18 (2018) issue of Science.

This added two new tweaks: (1) for DNA sequencing they selected a panel of 16 known ‘cancer genes’ and screened just those for specific mutations and (2) they included proteins in their analysis by measuring the circulating levels of 10 established biomarkers. Of these perhaps the most familiar is cancer antigen 125 (CA-125) which has been used as an indicator of ovarian cancer.

Sensitivity of CancerSEEK by tumour type. Error bars represent 95% confidence intervals (from Cohen et al., 2018).

The figure shows a detection rate of about 70% for eight cancer types in 1005 patients whose tumours had not spread. CancerSEEK performed best for five types (ovary, liver, stomach, pancreas and esophagus) that are difficult to detect early.

Is there still a question mark?

Of course there is! It’s biology — and cancer biology at that. The sensitivity is quite low for some of the cancers and it remains to be seen how high the false positive rate goes in populations larger than the 1,005 patients of this preliminary study.
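The false-positive worry is worth making concrete. In screening, even a very specific test throws up many false alarms when the disease is rare, as a little Bayes’ rule shows (the 99% specificity and 1% prevalence here are assumptions for illustration, not figures from the study):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive screening result reflects real disease."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# ~70% sensitivity (from the figure); specificity and prevalence assumed
ppv = positive_predictive_value(0.70, 0.99, 0.01)
print(f"PPV ≈ {ppv:.0%}")
```

On those assumptions fewer than half of positive results would be real cancers, which is why the test’s behaviour in large, mostly healthy populations is the crucial unknown.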

So let’s leave the last cautious word to my colleague Paul Pharoah: “I do not think that this new test has really moved the field of early detection very far forward … It remains a promising, but yet to be proven technology.”


Cohen, J.D. et al. (2018). Detection and localization of surgically resectable cancers with a multi-analyte blood test. Science 359, 926–930. doi:10.1126/science.aar3247

Lorenzo’s Oil for Nervous Breakdowns


A Happy New Year to all our readers – and indeed to anyone who isn’t a member of that merry band!

What better way to start than with a salute to the miracles of modern science by talking about how the lives of a group of young boys have been saved by one such miracle.

However, as is almost always the way in science, this miraculous moment is merely the latest step in a long journey. In retracing those steps we first meet a wonderful Belgian – so, when ‘name a famous Belgian’ comes up in your next pub quiz, you can triumphantly produce him as a variant on dear old Eddy Merckx (of bicycle fame) and César Franck (albeit born before Belgium was invented). As it happened, our star was born in Thames Ditton (in 1917: his parents were among the one quarter of a million Belgians who fled to Britain at the beginning of the First World War) but he grew up in Antwerp and the start of World War II found him on the point of becoming qualified as a doctor at the Catholic University of Leuven. Nonetheless, he joined the Belgian Army, was captured by the Germans, escaped, helped by his language skills, and completed his medical degree.

Not entirely down to luck

This set him off on a long scientific career in which he worked in major institutes in both Europe and America. He began by studying insulin (he was the first to suggest that insulin lowered blood sugar levels by prompting the liver to take up glucose), which led him to the wider problems of how cells are organized to carry out the myriad tasks of molecular breaking and making that keep us alive.

The notion of the cell as a kind of sac with an outer membrane that protects the inside from the world dates from Robert Hooke’s efforts with a microscope in the 1660s. By the end of the nineteenth century it had become clear that there were cells-within-cells: sub-compartments, also enclosed by membranes, where special events took place. Notably these included the nucleus (containing DNA of course) and mitochondria (sites of cellular respiration where the final stages of nutrient breakdown occur and the energy released is transformed into adenosine triphosphate (ATP) with the consumption of oxygen).

In the light of that history it might seem a bit surprising that two more sub-compartments (‘organelles’) remained hidden until the 1950s. However, if you’re thinking that such a delay could only be down to boffins taking massive coffee breaks and long vacations, you’ve never tried purifying cell components and getting them to work in test-tubes. It’s a process called ‘cell fractionation’ and, even with today’s methods, it’s a nightmare (sub-text: if you have to do it, give it to a Ph.D. student!).

By this point our famous Belgian had gathered a research group around him and they were trying to dissect how insulin worked in liver cells. To this end they (the Ph.D. students?!) were using cell fractionation and measuring the activity of an enzyme called acid phosphatase. Finding a very low level of activity one Friday afternoon, they stuck the samples in the fridge and went home. A few days later some dedicated soul pulled them out and re-measured the activity discovering, doubtless to their amazement, that it was now much higher!

In science you get odd results all the time – the thing is: can you repeat them? In this case they found the effect to be absolutely reproducible. Leave the samples a few days and you get more activity. Explanation: most of the enzyme they were measuring was contained within a membrane-like barrier that prevented the substrate (the chemical that the enzyme reacts with) getting to the enzyme. Over a few days the enzyme leaked through the barrier and, lo and behold, now when you measured activity there was more of it!

Thus was discovered the ‘lysosome’ – a cell-within-a-cell that we now know is home to an array of some 40-odd enzymes that break down a range of biomolecules (proteins, nucleic acids, sugars and lipids). Our self-effacing hero said it was down to ‘chance’ but in science, as in other fields of life, you make your own luck – often, as in this case, by spotting something abnormal, nailing it down and then coming up with an explanation.

In the last few years lysosomes have emerged as a major player in cancer because they help cells to escape death pathways. Furthermore, they can take up anti-cancer drugs, thereby reducing potency. For these reasons they are the focus of great interest as a therapeutic target.

Lysosomes in cells revealed by immunofluorescence.

Antibody molecules that stick to specific proteins are tagged with fluorescent labels. In these two cells protein filaments of F-actin that outline cell shape are labelled red. The green dots are lysosomes (picked out by an antibody that sticks to a lysosome protein, RAB9). Nuclei are blue (image: ThermoFisher Scientific).

Play it again Prof!

In something of a re-run of the lysosome story, the research team then found itself struggling with several other enzymes that also seemed to be shielded from the bulk of the cell – but the organelle these lived in wasn’t a lysosome – nor were they in mitochondria or anything else then known. Some 10 years after the lysosome the answer emerged as the ‘peroxisome’ – so called because some of their enzymes produce hydrogen peroxide. They’re also known as ‘microbodies’ – little sacs, present in virtually all cells, containing enzymatic goodies that break down molecules into smaller units. In short, they’re a variation on the lysosome theme and among their targets for catabolism are very long-chain fatty acids (for mitochondriacs the reaction is β-oxidation but by a different pathway to that in mitochondria).

Peroxisomes revealed by immunofluorescence.

As in the lysosome image, F-actin is red. The green spots here are from an antibody that binds to a peroxisome protein (PMP70). Nuclei are blue (image: Novus Biologicals)

Cell biology fans will by now have worked out that our first hero in this saga of heroes is Christian de Duve who shared the 1974 Nobel Prize in Physiology or Medicine with Albert Claude and George Palade.

A wonderful Belgian. Christian de Duve: physician and Nobel laureate.


Fascinating and important stuff – but nonetheless background to our main story which, as they used to say in The Goon Show, really starts here. It’s so exciting that, in 1992, they made a film about it! Who’d have believed it?! A movie about a fatty acid!! Cinema buffs may recall that in Lorenzo’s Oil Susan Sarandon and Nick Nolte played the parents of a little boy who’d been born with a desperate disease called adrenoleukodystrophy (ALD). There are several forms of ALD but in the childhood disease there is progression to a vegetative state and death occurs within 10 years. The severity of ALD arises from the destruction of myelin, the protective sheath that surrounds nerve fibres and is essential for transmission of messages between brain cells and the rest of the body. It occurs in about 1 in 20,000 people.

Electrical impulses (called action potentials) are transmitted along nerve and muscle fibres. Action potentials travel much faster (about 200 times) in myelinated nerve cells (right) than in unmyelinated neurons (left), because of saltatory conduction. Neurons (or nerve cells) transmit information using electrical and chemical signals.

The film traces the extraordinary effort and devotion of Lorenzo’s parents in seeking some form of treatment for their little boy and how, eventually, they lighted on a fatty acid found in lots of green plants – particularly in the oils from rapeseed and olives. It’s one of the dreaded ‘omega’ fatty acids – a mono-unsaturated one called erucic acid (if you’re interested, it can be denoted as 22:1ω9, meaning a chain of 22 carbon atoms with one double bond 9 carbons from the methyl end – so it’s ‘unsaturated’). In a dietary combination with oleic acid (another unsaturated fatty acid: 18:1ω9) it normalizes the accumulation of very long chain fatty acids in the brain and slows the progression of ALD. It did not reverse the neurological damage that had already been done to Lorenzo’s brain but, even so, he lived to the age of 30, some 22 years longer than predicted when he was diagnosed.
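If you like, that shorthand can be unpacked mechanically. A small sketch of the ω-notation (the parser is mine, purely illustrative):

```python
import re

def parse_fatty_acid(shorthand):
    """Parse lipid shorthand like '22:1ω9' into
    (carbon_count, double_bonds, position_of_first_double_bond_from_methyl_end).
    The ω part is optional (saturated fats, e.g. '16:0', have no double bond)."""
    m = re.fullmatch(r"(\d+):(\d+)(?:ω(\d+))?", shorthand)
    if not m:
        raise ValueError(f"unrecognised shorthand: {shorthand!r}")
    carbons, bonds, omega = m.groups()
    return int(carbons), int(bonds), int(omega) if omega else None

print(parse_fatty_acid("22:1ω9"))  # erucic acid: 22 carbons, 1 double bond, ω9
print(parse_fatty_acid("18:1ω9"))  # oleic acid
```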

What’s going on?

It’s pretty obvious from the story of Lorenzo’s Oil that ALD is a genetic disease and you will have guessed that we wouldn’t have summarized the wonderful career of Christian de Duve had it not turned out that the fault lies in peroxisomes.

The culprit is a gene (called ABCD1) on the X chromosome (so ALD is an X-linked genetic disease). ABCD1 encodes part of the protein channel that carries very long chain fatty acids into peroxisomes. Mutations in ABCD1 (over 500 have been found) cause defective import of fatty acids, resulting in the accumulation of very long chain fatty acids in various tissues. This can lead to irreversible brain damage. In children the myelin sheath of neurons is damaged, causing neurological defects including impaired vision and speech disorders.

And the miracle?

It’s gene therapy of course and, helpfully, we’ve already seen it in action. Self Help – Part 2 described how novel genes can be inserted into the DNA of cells taken from a blood sample. The genetically modified cells (T lymphocytes) are grown in the laboratory and then infused into the patient – in that example the engineered cells carried an artificial T cell receptor that enabled them to target a leukemia.

In Gosh! Wonderful GOSH we saw how the folk at Great Ormond Street Hospital adapted that approach to treat a leukemia in a little girl.

Now David Williams, Florian Eichler, and colleagues from Harvard and many other centres around the world, including GOSH, have adapted these methods to tackle ALD. Again, from a blood sample they selected one type of cell (stem cells that give rise to all blood cell types) and then used genetic engineering to insert a complete, normal copy of the DNA that encodes ABCD1. These cells were then infused into patients. As in the earlier studies, they used a virus (or rather part of a viral genome) to get the new genetic material into cells. They chose a lentivirus for the job — these are a family of retroviruses (i.e. they have RNA genomes) that includes HIV. Specifically they used a commercial vector called Lenti-D. During the life cycle of RNA viruses their genomes are converted to DNA that becomes a permanent part of the host DNA. What’s more, lentiviruses can infect both non-dividing and actively dividing cells, so they’re ideal for the purpose.

In the first phase of this ongoing, multi-centre trial a total of 17 boys with ALD received Lenti-D gene therapy. After about 30 months, in results reported in October 2017, 15 of the 17 patients were alive and free of major functional disability, with minimal clinical symptoms. Two of the boys with advanced symptoms had died. The achievement of such high remission rates is a real triumph, albeit in a study that will continue for many years.

In tracing this extraordinary galaxy of heroes, one further name merits special mention for he played a critical role in the story. In 1999 Jesse Gelsinger, a teenager, became the first person to receive viral gene therapy. This was for a metabolic defect and a modified adenovirus was used as the gene carrier. Despite this method having been extensively tested in a range of animals (and the fact that most humans, without knowing it, are infected with some form of adenovirus), Gelsinger died after his body mounted a massive immune response to the viral vector that caused multiple organ failure and brain death.

This was, of course, a huge set-back for gene therapy. Despite this, the field has advanced significantly in the new century, both in methods of gene delivery (including over 400 adenovirus-based gene therapy trials) and in understanding how to deal with unexpected immune reactions. Even so, to this day the Jesse Gelsinger disaster weighs heavily on those involved in gene therapy, for it reminds us all that the field is still in its infancy and that each new step is a venture into the unknown requiring skill, perseverance and bravery from all involved – scientists, doctors and patients. But what better encouragement could there be than the ALD story of young lives restored?

It’s taken us a while to piece together the main threads of this wonderful tale but it’s emerged as a brilliant example of how science proceeds: in tiny steps, usually with no sense of direction. And yet, despite setbacks, over much time, fragments of knowledge come together to find a place in the grand jigsaw of life.

In setting out to probe the recesses of metabolism, Christian de Duve cannot have had any inkling that he would build a foundation on which twenty-first century technology could devise a means of saving youngsters from a truly terrible fate but, my goodness, what a legacy!!!


Eichler, F. et al. (2017). Hematopoietic Stem-Cell Gene Therapy for Cerebral Adrenoleukodystrophy. The New England Journal of Medicine 377, 1630-1638.


Much Ado About … Some Things

Given that the ‘festive season’ is approaching, maybe we should try to find something joyous to say about cancer. It’s not difficult. Over the six decades from 1950 to 2013 the 5-year Relative Survival Rates for white Americans for breast and prostate cancers have gone from about 50% to over 90% (99.6% in fact for prostate). A number of other types (e.g., testicular cancer) are now largely curable, if treated early enough. Similar trends have occurred in most developed countries – all this through advances in surgery and radiotherapy but, most of all, because of new drugs.

Big Pharma

It’s big business. According to the Financial Times, annual spending on cancer drugs hit $100 billion worldwide in 2014 and is projected to exceed $150 billion by 2020. As you would hope, this expenditure on drug development and production has resulted in a gradual rise in available cancer drugs, represented below by the number of new cancer drugs approved each year by the American Food and Drug Administration (FDA).

Number of new cancer drugs approved each year by the American Food and Drug Administration from 1949 to 2016 (from Hope Cristol, The American Cancer Society, 2016).

Data compiled from the National Cancer Institute and the FDA Orange Book; reporting and analysis by Sabrina Singleton, ACS research historian.

We should note that the FDA equivalent on this side of the Atlantic is the European Medicines Agency (EMA) and they tend to follow similar licensing patterns. Thus in 2016 a total of 74 new drug approvals were granted by the FDA and the EMA — 19 by the EMA only, 19 by only the FDA, with 36 approved by both. Of the drugs approved by the EMA in 2016, 17 had received prior FDA approval (i.e. in 2015 or earlier). However, only six drugs registered in the US in 2016 had prior EMA approval, indicating that drug companies tend to apply for approval in the US first before registering their products in the EU.
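If you fancy checking the arithmetic, the overlap is just simple set-counting. Here is a minimal sketch using only the 2016 figures quoted above:

```python
# Sanity-check of the 2016 FDA/EMA approval figures quoted above.
fda_only, ema_only, both = 19, 19, 36

total_distinct = fda_only + ema_only + both  # each drug counted once
fda_total = fda_only + both                  # everything the FDA approved
ema_total = ema_only + both                  # everything the EMA approved

print(total_distinct)          # total distinct new drugs across both agencies
print(fda_total, ema_total)    # per-agency totals
```

Which confirms the 74 distinct drugs, with each agency approving 55 in all.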

So rejoice and be merry — and drink to the triumph of science!!

It’s not unbounded joy, of course, because global cancer incidence continues to rise and a number of cancers (e.g., lung, liver and pancreas) remain refractory to all approaches thus far, with survival rates stuck below 20%.

A Winter’s Tale

But what’s this? A further, wintry blast of reality from The British Medical Journal no less. It comes from Courtney Davis and her friends at King’s College London and the London School of Economics and Political Science (LSE) who looked at the track record of cancer drugs approved by the EMA between 2009 and 2013. Over this period the EMA approved the use of 48 new cancer drugs.

Charge your glass

It might be a good idea to sit down with a stiff drink at this point and remind ourselves that there are only two aims for cancer drugs: they must either extend the life of the patient or improve their quality of life.

What Dr. D & chums found was — and here, to be absolutely clear, we should quote exactly what they said — “… that most drugs entered the market without evidence of benefit on survival or quality of life. At a minimum of 3.3 years after market entry, there was still no conclusive evidence that these drugs either extended or improved life for most cancer indications. When there were survival gains over existing treatment options or placebo, they were often marginal.”

To be precise, it was 57% – 39 of the 68 cancer ‘indications’ (approved uses) covered by those 48 drugs – that entered the market with no evidence of improved survival or quality of life.


What does this mean – and how can it be?

Well, first up, clearly a lot of money has been spent by drug companies and health services for no demonstrated benefit to patients. Unsurprisingly the authors of the study called on the EMA to “increase the evidence bar for the market authorisation of new cancer drugs.” Which I take to mean ‘get some meaningful data before you stick stuff out there.’ But here’s where things get tricky. If your aim is to extend life, how can you prove a drug works other than by giving it to a significant number of patients and waiting a long time to see what happens?

The way round this has been for clinical trials to use indirect or “surrogate” measures of drug efficacy. The idea is that these endpoints show whether a drug has biological activity and thus might be of clinical use. However, they are not reliable measures of improved quality of life or survival.

So this report leaves us with a long-standing problem. On the one hand there is the understandable drive to get new drugs to patients asap but, on the other, there is the fact that only human beings can model how well a drug works in us. However good your in vitro systems may be and however closely mice may resemble men, they’re not the real thing.

One thing we could do, as the report suggests, is to integrate the development and commercialization of cancer drugs at least across the two biggest markets of America and Europe so that the FDA and the EMA don’t appear to be operating in parallel worlds.

All told then, perhaps we should supplant our earlier merriment with the chilling thought that, even after so many years of perspiration and inspiration, cancers still present an immense challenge.


Davis, C. et al. (2017). Availability of evidence of benefits on overall survival and quality of life of cancer drugs approved by European Medicines Agency: retrospective cohort study of drug approvals 2009-13. BMJ 2017;359:j4530 doi: 10.1136/bmj.j4530 (Published 2017 October 03).

SEER Cancer Statistics Review (CSR) 1975-2014, updated June 28, 2017.

Cristol, H. (2016). Evolution and Future of Cancer Treatments, The American Cancer Society.


A Musical Offering 

It’s generally accepted that Johann Sebastian Bach was one of the greatest, if not the greatest, musical composer of all time. In well over 1000 compositions he laid down the framework upon which rested virtually all Western music of the following 200 years. Among these works is The Musical Offering, written in 1747, a collection of pieces based on a single theme, one of which (the six-part ricercar) has been described as the most significant piano composition in history.

Along the way to becoming a unique composer, Bach married twice and sired twenty children, only ten of whom survived into adulthood. Those figures highlight another way in which JSB was something of a freak because, in 1750 when he died aged 65, the average life expectancy in Europe was under 40 years. For that reason cancers, being primarily diseases of old age, were much less prominent then than now when, on average, we live to be over 80 and cancers account for about one in three deaths.

It’s safe to say that in the 18th century neither Bach nor anyone else knew anything of cancer, let alone that our genetic material carries tens of thousands of genes – a kind of molecular keyboard upon which cellular machinery plays to produce an output of proteins that distinguishes one cell type from another but is also continuously varying, even within individual cells. Bach would have been fascinated by this fluctuating molecular mosaic that, through the wonders of modern sequencing methods, we can display as ‘heat maps’ showing which genes are turned on (being expressed) and to what level.

Musical genes. Left: a heat map showing the pattern of genes being expressed at a given time in several different types of cell. Red: high expression level; green low expression. On the right is the same information transformed into musical notation using the Gene Expression Music Algorithm, GEMusicA (from Staege 2016).

With commendable vision a chap by the name of Martin Staege has come up with an alternative way of looking at the rather mind-blowing picture conveyed by heat maps. Staege is at the Martin Luther University of Halle-Wittenberg – appropriately as Bach’s eldest son studied at the University of Halle. His idea is that gene expression patterns can be transformed into sounds characterized by their frequency (pitch) and tone duration. In other words you can make genes play tunes – and what’s more compare the notes from different cell samples (e.g., normal and tumour cells) so that you can ‘hear’ the differences in gene expression.

Remarkable or what?!
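For the curious, the core trick – high expression maps to high pitch – can be sketched in a few lines. To be clear, this is not Staege’s GEMusicA algorithm, just a bare-bones illustration of the idea, and the expression profile below is entirely made up:

```python
# A minimal, hypothetical illustration of turning gene expression into notes.
# This is NOT Staege's GEMusicA algorithm -- just the core idea: high
# expression -> high pitch, rescaled onto a playable MIDI note range.

def expression_to_notes(levels, low_note=48, high_note=84):
    """Linearly rescale expression values onto MIDI note numbers."""
    lo, hi = min(levels), max(levels)
    span = (hi - lo) or 1  # avoid division by zero for a flat profile
    return [round(low_note + (v - lo) / span * (high_note - low_note))
            for v in levels]

# Invented expression profile for a handful of genes in one sample:
profile = [0.2, 5.1, 2.4, 9.8, 0.0]
print(expression_to_notes(profile))
```

Running the same mapping on a tumour sample and a normal sample would give two note sequences whose differences you could, quite literally, hear.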

Unsurprisingly, gene tunes sound more Alban Berg than Magic Flute, prompting the redoubtable Dr. Staege to go one step further by producing an algorithm that fits gene themes as best it can to more singable pieces – so you get a kind of difference melody. I don’t think Beethoven or Wagner would see this biological music as a threat and they might, like me, ask ‘what’s the point?’

To which, I guess, the answer is ‘it’s clever and fun’. It’s also yet another way of showing the power of DNA as an information storage medium, and making the point that in this guise it may, in due course, make a massive impact on our lives – much more mundane than musical genes but hugely more useful.


Staege, M. S. (2016). Gene Expression Music Algorithm-Based Characterization of the Ewing Sarcoma Stem Cell Signature. Stem Cells International, Volume 2016, Article ID 7674824, 10 pages.

Staege, M. S. (2015). A short treatise concerning a musical approach for the interpretation of gene expression data. Sci. Rep. 5, 15281.







The Blame Game

“Why do people get it?” — perhaps the most frequently asked question about cancer. But nowadays most of us can come up with a quick answer: “It’s mutations” — that is, damage to DNA, our genetic material. Such changes are most commonly to the sequence of DNA (alterations to individual bases or loss or gain of bits). That’s a molecular biologist’s answer. What we really want to know is “Why?” How do these changes come about and, of course, what can we do about them?

The things we do …

We’ve known part of the answer for a long time — it’s what we do to ourselves, stupid! The best known example is smoking, shown to cause lung cancer in the 1930s. We know now that chemicals in cigarette smoke damage DNA and they’re so good at it that over 90% of lung cancer is down to smoking. So pernicious is the habit that it killed about 100 million people in the twentieth century and, unless something pretty drastic happens, the number of deaths this century will be one billion.

It’s true, we have made some progress. Most countries now regulate smoking. Bhutan (bless ’em) was the first nation to outlaw smoking in all public places. In the UK tobacco advertising is banned, as is smoking in all work places. But none of this happened until we’d got well into the twenty-first century! Oh, and if you’re wondering how the country that likes to style itself the world leader is getting on, Congress has so far managed to avoid passing any nationwide smoking ban and left it to individual states — with the result that across the USA laws range from total bans to no regulation of smoking at all! All told, the saga has been one of the more staggering examples of political impotence.

I’m sure you can think of other daft things we do to propel us to our cancer graves but we need to move on by noting that, aside from what we do to ourselves, the world we live in contributes something of a helping hand. Thus some useful foods nevertheless contain harmful substances and even the ground we stand on gives off low levels of radiation. And there’s not a lot we can do about such things.

… And are done to us

Then there’s heredity — the state of our DNA when we get it. It’s been clear for some time that mutations passed to us at birth kick off about 10% of all cancers (see for example, A Taxing Inheritance).

Way back in 1866 Paul Broca suggested it might be possible to inherit breast cancer. He’d looked at his wife’s family tree and noted that ten out of twenty-four women, spread over four generations, had died from that disease and that there had been cases of other types of cancer in the family as well. This large proportion was not, he believed, mere chance. Now we know that a changed (mutated) form of a gene (a unit of heredity), passed from generation to generation, was almost certainly responsible for the suffering of this family.

So broadly speaking there are two long-recognized categories that cause cancer —‘environmental’ and ‘hereditary’ — and, although we cheat by lumping things that we can control (e.g., smoking, eating too much red meat and sunbathing) into the ‘environmental’ camp (there should be a separate group: ‘stupidity’), many factors really are beyond our control.

As ever, it’s worse than that

Lurking in the wings for many years now has been a potential third cause that arises from a slightly tricky concept — namely the fact that our DNA, the genetic rock upon which all life is built, isn’t rock-like at all. In fact the chemistry of DNA makes it inherently unstable. Thinking about it from the viewpoint of evolution, of course it’s unstable: it has to be to permit change as new genes, and hence new proteins, are made and unmade — allowing life forms to advance. Think of it like close relationships: we’re fond of calling such things ‘permanent’, ‘unchanging’, ‘solid as a rock’ even. But they’re not: they change all the time, adapting to our shortcomings and to how individuals develop and mature.

With that in mind maybe it’s less surprising to find that DNA reacts with a wide range of chemicals, some that we consume but others arising from the natural reactions of the body – products of metabolism in fact. And then, speaking of shortcomings, there’s the truism that ‘nobody’s perfect’ and the realization that this applies to the mechanics of DNA replication as well as everything else. In other words, every time we make a new cell its DNA differs from the original. Cells have remarkably smart methods for correcting most mistakes made during replication but, inevitably, some get through and become fixed in the new genome.

Although ‘replicative mutations’ have been known for a while, nobody had come up with a way of measuring how much they contribute to cancers. Step forward Bert Vogelstein and Cristian Tomasetti at Johns Hopkins University with the idea of looking at ‘stem cells’ — cells that can divide to make more of themselves or to turn themselves into specialized cell types. They reasoned, bearing in mind that with every division there’s a risk of a cancer-causing mutation in a daughter cell, that if you knew the number of stem cells in an organ and you could estimate the total number of divisions over a lifetime, that might relate to cancer risk.

Indeed it did. In spades, because replication errors turned out to account for about two-thirds of the mutations that drive cancers. In other words, the majority of cancer-causing mutations arise from internal processes rather than from external agents.

Quick test to see if that fits with something we know: cancers of the intestine. Cancers of the duodenum (the first section of the small intestine) are rare compared with those of the colon (the large intestine, into which the duodenum empties). For 2017 in the USA the estimates are 1390 and 50,260 deaths, respectively – that’s about 0.2% of all cancer deaths versus 8% for colon cancers. Sure enough, Tomasetti and Vogelstein estimated the cell division rate to be about 100 times greater in the colon over a lifetime.
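For those who like their back-of-envelope checks explicit, here is the arithmetic. The death estimates come from the paragraph above; the total figure for US cancer deaths in 2017 (about 600,920, the American Cancer Society estimate) is my addition, needed for the percentages:

```python
# Rough check of the duodenum-vs-colon comparison. The 2017 death estimates
# are from the text; the total (~600,920, the ACS estimate for US cancer
# deaths in 2017) is an added assumption used for the percentage calculation.
duodenum_deaths = 1_390
colon_deaths = 50_260
total_cancer_deaths = 600_920  # assumption, not from the text

duodenum_pct = 100 * duodenum_deaths / total_cancer_deaths
colon_pct = 100 * colon_deaths / total_cancer_deaths
print(f"duodenum: {duodenum_pct:.1f}% of cancer deaths")
print(f"colon:    {colon_pct:.1f}% of cancer deaths")

# Under Tomasetti & Vogelstein's idea, risk should track lifetime stem-cell
# divisions -- and the colon racks up roughly 100 times more divisions than
# the duodenum, consistent with its far higher death toll:
print(f"death ratio: {colon_deaths / duodenum_deaths:.0f}-fold")
```

The death ratio (about 36-fold) and the division ratio (about 100-fold) are of the same order, which is as much as this crude comparison can claim.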

Mice, somewhat curiously, are the other way round: they pick up more cell divisions in their small intestine — and more cancers — than in their colon.

This correlation of rising risk with increasing number of divisions held over 31 different types of cancer — as it did when extended from USA to world-wide data, thereby eliminating a bias from environmental factors.

The upshot of this is that to ‘environmental’ and ‘hereditary’ factors we need to add a third category of ‘replication errors.’

Real-life examples of the impact of replicative mutations on lung cancer and prostate cancer.

About 90% of lung cancers are preventable: heredity plays no significant role and the estimate is that 35% of all driver (i.e. cancer-promoting) mutations are due to replication errors. For prostate cancers there is no evidence that environmental factors are significant and hereditary factors account for 5 to 9% of cases. The remaining 95% of driver gene mutations are estimated to be replication errors, so virtually none of these prostate cancers is preventable. Clouds represent contributions from environmental factors. Gray dots: environmental mutations; yellow dots: replicative mutations; blue dots + H: hereditary mutations (from Tomasetti et al. 2017).

 Causes of driver mutations in 18 types of female cancer (UK).

The colour codes are the same for hereditary (left), replicative (centre), and environmental (right) factors and are from white (0%) to brightest red (100%).

The left-hand schematic indicates that inherited mutations are not statistically significant in these cancers (note that Paul Broca’s findings related to fewer than 10% of breast cancers – the proportion we now know to be caused by abnormal genes passed from parent to child).

B, brain; Bl, bladder; Br, breast; C, cervical; CR, colorectal; E, esophagus; HN, head and neck; K, kidney; Li, liver; Lk, leukemia; Lu, lung; M, melanoma; NHL, non-Hodgkin lymphoma; O, ovarian; P, pancreas; S, stomach; Th, thyroid; U, uterus (from Tomasetti et al. 2017).

Controversial or what …?

It’s fair to say that the estimate of two-thirds of cancer-driving mutations being down to internal faults was a surprise to many.

It has to be said that there’s a continuing debate about the precise numbers – not least because figures for cell divisions in some tissues aren’t available and also because of somewhat vaguer problems, e.g., to what extent external assaults contribute to replication errors.

Nevertheless, it now seems clear that what Tomasetti and Vogelstein call “bad luck” can be blamed for a significant number of cancers. That’s good because knowing that it’s not your ‘fault’ may help some patients but we need to be wary of promoting that message too strongly in the media.

The fact is that, whatever the proportion might be that we can put down to “bad luck”, there are still a great many cancers that can be prevented.

What’s to be done?

Now we know what to blame we can return to the question of what can be done. It won’t take long because at the moment the answer is ‘not much’. The accumulation of mistakes from replication errors is random, so we cannot predict who will find themselves with a critical (i.e. cancer-producing) set. But that scarcely matters as we have no way of preventing them happening. So all we can do at the moment is deal with what presents itself with the treatments currently available, comforting ourselves that in the long term things like gene editing might enable us to rectify critical replication mutations.

So, like a lot of fascinating advances in the cancer field, the take-home message here is “that’s all very interesting but in the meantime we need to keep focusing on the possible: the fact that if we stopped smoking, got people to eat sensibly and gave everyone decent sanitation we could cut cancers by half”. Give or take a few percent!


Tomasetti, C. et al., (2017). Stem cell divisions, somatic mutations, cancer etiology, and cancer prevention. Science 355, 1330–1334.

Tomasetti, C. and Vogelstein, B. (2015). Variation in cancer risk among tissues can be explained by the number of stem cell divisions. Science 347, 78-81.

Hares And Tortoises

You may have noticed that the last few months have seen a bit of a DNA-fest in these pages. Don’t blame me. It’s all the fault of them scientists beavering away in their labs. We’ve just done “Making Movies in DNA”; in “And Now There Are Six!!” the genetic code was expanded from four to six units by making two new ones artificially; and in “How Does DNA Do It?” we saw how words can be transformed into a sequence of DNA.

Now they’re at it again – or at least Stephen Kowalczykowski, James Graham and colleagues of the University of California at Davis are – revealing yet more astonishing things about this molecule, just when you could be thinking we’ve got the hang of it.

I might add that I’m grateful to my correspondent David Archer of The Society of Biology for bringing this piece of work to my notice as I’d missed it in the journal Cell (cries of ‘shame’ and ‘shurely shome mistake’ mingle in the background).

What is it this time?  

Well it’s two really astonishing things about DNA replication – the process by which double-stranded DNA is pulled apart so that each strand can act as a template for making a new DNA molecule. Result: as cells progress towards division, they double their DNA content so that equal amounts can be given to each new daughter cell. The first source of amazement is that Stephen K & chums have filmed this happening in real time. That’s a terrific feat – but what it reveals is quite bizarre.

Up to now it’s been assumed that the protein machines (DNA polymerases) doing the biz trundle along each of the separated strands of parental DNA at more or less the same speed. It would seem to make no sense to do otherwise and risk ending up with the job half done. In other words, the duplication of the two strands is coordinated. Is that what K & Co found? Not a bit of it! Extraordinary to relate, it appears that there’s no coordination between the strands at all!! Not for the first time in the history of molecular biology a technical advance has thrown up the totally unexpected. Before we look at the results in a bit more detail, a little background might be useful.

One divides into two

Making two identical copies of DNA from one original happens every time one cell divides to make two. And there’s a lot of it about. As is well known, we all start out as one cell (i.e. a fertilized egg) that turns into a human being – 50 trillion cells (that’s a 5 followed by 13 zeroes). And even after we’ve been assembled it takes a lot of cell-making to keep us ticking over – about one million new cells every second. Just take a second to think about that: DNA comes in the well-known form of a double helix – two strands made up of chemical units (called nucleotides) linked together. Each unit has one of four bases (cytosine (C), guanine (G), adenine (A), or thymine (T)) and the strands are “complementary” because C pairs with G and A with T – a rigid rule that means if you know the sequence of bases in one strand you can work out what it is in the other. So far so simple. But, as we noted in “How Does DNA Do It?”, the coding power of DNA lies in its size. In us three billion letters are available to do the encoding. That is, there are just over 3,000 million units in each chain – i.e. 3,000 million base-pairs all told. And all of these are copied (twice) for every new cell.
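The complementarity rule is so rigid that it fits in a few lines of code – knowing one strand fixes the other (read in the opposite direction):

```python
# The base-pairing rule from the paragraph above, as code: C pairs with G
# and A with T, and the partner strand runs in the opposite direction.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement_strand(seq):
    """Return the complementary strand (i.e. the reverse complement)."""
    return "".join(PAIR[base] for base in reversed(seq))

print(complement_strand("ATGCCGTA"))
```

Apply it twice and you get the original sequence back, which is exactly why each separated strand can serve as a template for rebuilding the whole double helix.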

DNA replication: The double helix is ‘unzipped’ so that each separated strand (turquoise) can act as a template for replicating a new partner strand (green). This creates a ‘replication fork’ – two branches of single stranded DNA. The new strands are made by protein complexes called DNA polymerases chugging along the parent strands, making new, complementary, strands as they go. There’s a small technical wrinkle here: new DNA chains can only be extended in one direction. This means that, while one strand can be made continuously (the leading strand), the other has to be put together in short bits as the parent strand is unwound, with the bits being joined up afterwards (the lagging strand).



Timing is everything

So the cell’s task is to unzip the double helix and use each exposed strand as a template for building a new partner strand. Things are helped by DNA being split into fragments (chromosomes: in humans, 22 pairs plus the two sex chromosomes – 46 per cell all told). Even so, chromosomes are huge: the longest (chromosome 1) has nearly 250 million base-pairs; the shortest (chr 21) has about 47 million. The problem for the machinery that has evolved for the job is that it cranks along at 50 pairs per second – roughly a month per chromosome. But in a normal cell cycle the whole business is done in about two hours! That’s made possible because replication doesn’t do the obvious: start at one end and work its way to the other. Cunningly it hits lots of ‘start points’ – up to 100,000 in a single cell – making lots of short bits at the same time that are then joined up. In other words replication proceeds simultaneously from many different sites in chromosomes. Enzymes join the pieces together to make the final, complete copy.

It’s rather like you having some horribly repetitive chore to do – washing up after a big dinner. On your own you might start at one end of the pile and work through it but, far better, get one member of the family to do the plates, another the cutlery, etc. and – job done!!
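Here is the washing-up arithmetic done properly for chromosome 1. The per-chromosome number of start points below is purely illustrative – the ~100,000 figure above is for a whole cell:

```python
# Back-of-envelope for the numbers above: chromosome 1 copied end-to-end by
# one fork versus copied in parallel from many start points. The origin
# count used here is a hypothetical illustration, not a measured value.
CHR1_BP = 250_000_000   # base pairs in chromosome 1 (approx.)
FORK_RATE = 50          # base pairs copied per second per fork

sequential_days = CHR1_BP / FORK_RATE / 86_400
print(f"one fork, end to end: ~{sequential_days:.0f} days")

origins = 10_000        # hypothetical number of start points on chr 1
# Each origin fires two forks moving in opposite directions, so each fork
# only has to cover (chromosome length / origins / 2) base pairs.
parallel_minutes = CHR1_BP / origins / 2 / FORK_RATE / 60
print(f"{origins:,} origins: ~{parallel_minutes:.0f} minutes")
```

From a couple of months down to a few minutes – which is how the whole genome fits comfortably inside one cell cycle.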

Now for today’s bit of amazing science

What Kowalczykowski and friends did was to extract DNA from bugs (E. coli bacteria in fact, which can make DNA about 20 times faster than human cells), set up a replication system and measure what went on by microscopy, using a dye (SYTOX Orange, which is fluorescent) that sticks to complete double helices but not to single strands. Thus they could track progress along a strand as a new double helix formed. What they saw was that each strand acted independently of the other. Overall, the rate of replication of the two strands was about the same (as it must be in the end) but along the way there were stops and starts and sometimes one strand would grow at ten times the speed of the other. How weird is that?!!

Seeing DNA being made. In this picture microscopy reveals three extending stretches of double-stranded DNA being made (Graham et al. 2017). Click here to see video.
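You can get a feel for how odd the finding is with a toy simulation: two strands advancing independently, each randomly stalling or sprinting, yet both getting the job done. Every rate and probability below is invented purely for illustration:

```python
# A toy simulation of the paper's surprise: two strands copied with
# independent, stochastic stop-start behaviour still end up the same
# length. All rates and probabilities here are invented for illustration.
import random

random.seed(1)

def replicate_strand(target_bp, base_rate=50, steps=100_000):
    """Advance a strand in 1-second steps; randomly stall or sprint."""
    position = 0
    for _ in range(steps):
        r = random.random()
        if r < 0.2:
            rate = 0               # stalled
        elif r < 0.9:
            rate = base_rate       # normal chugging
        else:
            rate = base_rate * 10  # occasional ten-fold burst
        position = min(target_bp, position + rate)
        if position == target_bp:
            break
    return position

leading = replicate_strand(1_000_000)
lagging = replicate_strand(1_000_000)
print(leading, lagging)
```

Run it and both strands reach the full million base pairs, despite never once coordinating – the puzzle is how real cells guarantee the same happy ending.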

You could picture DNA replication as one of those Swiss railway trains cranking up a mountain at an improbable angle, using a rack-and-pinion to stop it sliding backwards. Think of the engaging cogs as new base-pairs. The train just keeps chugging along until it reaches its next stop. But why doesn’t the DNA-making machinery do the same? Well, we haven’t much of a clue. One difference is that the train has its track (and rack) laid out before it, whereas DNA is continuously being unwound to open the template. Some bits are more difficult to unwind than others and this variation may cause the system to go in fits and starts. Another contribution may come from the many proteins involved in this complicated process. As well as the polymerases there are things that unwind DNA, stabilize it, stitch new bits together, etc. and these complexes are continuously forming, falling apart and re-assembling – all of which gives plenty of scope for erratic behaviour.

Fact of the matter is, we don’t know. So, in revealing completely unexpected behaviour, this technical triumph throws up the question of how two strands working independently manage, in the end, to come up with the perfect finished product.

But hey! This wouldn’t be science if we had all the answers!


Graham, J.E., Marians, K.J., Kowalczykowski, S.C. (2017). Independent and Stochastic Action of DNA Polymerases in the Replisome. Cell 169, 1201–1213.