Sunday, 26 February 2012


Yersinia pestis holds the dubious title of the world's most devastating bacterial pathogen. While its glory days of the Black Death are thankfully a thing of the past, the bacterium remains a threat to human health to this day. A recent paper published in PNAS describes how it switches off the immune system in the lungs, going some way towards explaining why the pneumonic form of the disease is almost always fatal if left untreated.

During the Middle Ages, the plague, or Black Death—so called because of the blackening of its victims' skin and blood—killed approximately a hundred million people across the world. In Europe, in particular, between thirty and sixty percent of the population is believed to have perished. Although we now know that the bacterium responsible was transmitted by rat fleas, Europe in the Middle Ages was not known for its sound grasp of science. Theories to explain the cause of the Black Death included punishment from God, alignment of the planets, deliberate poisoning by members of other religions, and ‘bad air’. This final theory persisted for some time, leading seventeenth-century doctors to don bird-like masks filled with strong-smelling substances, such as herbs, vinegar or dried flowers, to keep away bad smells and, therefore, the plague.

While today we can cure the plague with antibiotics, historical treatments were as unreliable as the Middle Ages' understanding of the disease. The characteristic swellings of a victim's lymph nodes were often treated by blood-letting and the application of butter, onion and garlic poultices. But such remedies did little to improve a victim's chances (even if they did make the patient smell delicious)—mortality rates varied between sixty and one hundred percent depending on the form of the disease afflicting the patient. This led the desperate population to attempt far more extreme measures, from medicines based on nothing but superstition, such as dried toad, to self-flagellation to calm their clearly angry gods.

The three predominant forms of the disease were described by a Flemish musician named Louis Heyligen (who died of the plague in 1348):

"In the first people suffer an infection of the lungs, which leads to breathing difficulties. Whoever has this corruption or contamination to any extent cannot escape but will die within two days. Another form...in which boils erupt under the armpits,...a third form in which people of both sexes are attacked in the groin."

So anything involving the words "attacked in the groin" is clearly a bad thing. But these three forms of the plague come in different flavours of "bad". Of the three, bubonic plague, with its unpleasant boils and swellings, is the least fatal, killing around two-thirds of those infected. Whereas bubonic plague spreads throughout an infected person's lymphatic system, septicaemic plague is an infection of the bloodstream and is almost always fatal. The final form, the rarer pneumonic plague, also has a near one-hundred-percent mortality rate; it involves infection of the lungs, often occurs secondary to bubonic plague, and can be spread directly from person to person.

One of the most interesting aspects of pneumonic plague is that the first 36 hours of infection involve rapid multiplication of the bacteria in the lungs but no immune response from the host. It is as if the immune system simply doesn't notice the infection until it is too late to do anything about it. This ability to replicate completely beneath the immune system's radar makes Y. pestis unique among bacterial pathogens, and a group from the University of North Carolina recently attempted to shed more light on how Y. pestis achieves this feat, publishing their findings in PNAS.

So is Y. pestis's success down to a) an ability to hide from the immune system, or b) a deliberate suppression of the normal host response to a bacterial infection? To answer this question, the scientists co-infected mice with two strains of Y. pestis—one capable of causing plague in mice and one which is usually recognised and cleared by the immune system. If the bacteria are capable of modifying conditions in the lung for their own benefit, it should be possible for a non-pathogenic mutant of Y. pestis to survive when co-infected with a virulent strain.




And this is exactly what the scientists found. In the above image, the green bacteria would normally be cleared by the immune system but, in the presence of the pathogenic red strain, they are able to survive. This suggests that the pathogenic Y. pestis is actively switching off the immune system, establishing a unique protective environment that allows even non-pathogenic organisms to prosper. The authors went on to show that this effect isn't limited to strains of plague—other species of bacteria not usually able to colonise the lung can also replicate unperturbed when present as a co-infection with Y. pestis.

Part of this immunosuppressive role is carried out by effectors injected into the host cell by a type III secretion system—a kind of bacterial hypodermic needle. But this isn't the only mechanism involved and, unfortunately, determining exactly how Y. pestis establishes the permissive environment is proving difficult. The authors of the PNAS paper turned to a widely used approach to investigate which Y. pestis genes are vital for an infection to progress. TraSH screening is a really clever method which involves infecting an animal model with large pools of gene mutants and determining which mutants are lost over the time-course of the infection. In other bacterial species, it is every bacterium for itself: mutants with a defect in virulence fail to survive in the animal model, giving an insight into which genes are vital for infection. But this does not work well for Y. pestis, because the still-virulent mutants in the pool permit the growth of attenuated mutants that, alone, would be unable to cause disease.


Screening for genes involved in infection: an animal model is infected with a pool of single mutants. Those mutants lost during infection are identified, and the mutated genes are used to learn more about what is required for an infection. This method does not work well with Y. pestis because attenuated mutants can survive in the permissive lung environment created by the other mutants, despite not being able to create this environment on their own.
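To make that failure mode concrete, here is a toy sketch (in Python) of how a pooled screen is normally read out, and why a pathogen that builds a shared permissive environment breaks it. The strain names and fitness values are invented purely for illustration; this is not the authors' actual TraSH analysis.

```python
# Toy model of a pooled mutant screen (TraSH-style readout).
# All strain names and fitness values are invented for illustration.

def run_screen(fitness, generations=10):
    """Propagate a pool of mutants and return each mutant's
    output/input abundance ratio (the screen's readout)."""
    pool = {name: 1.0 for name in fitness}                    # equal input abundance
    for _ in range(generations):
        pool = {name: n * fitness[name] for name, n in pool.items()}
        total = sum(pool.values())
        pool = {name: n / total for name, n in pool.items()}  # renormalise the pool
    input_freq = 1.0 / len(fitness)
    return {name: round(n / input_freq, 2) for name, n in pool.items()}

# In a typical pathogen it is every mutant for itself, so attenuated
# mutants compete poorly and drop out of the pool:
print(run_screen({"virulent-like": 1.0, "attenuated_A": 0.6, "attenuated_B": 0.7}))

# In a Y. pestis-like lung, the virulent mutants create a permissive
# environment shared by everyone, so attenuated mutants ride along
# and their genes wrongly look dispensable:
print(run_screen({"virulent-like": 1.0, "attenuated_A": 0.98, "attenuated_B": 0.99}))
```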






Part of the modern-day interest in pneumonic plague is, unfortunately, the result of a human rather than a natural threat—bioterrorism. The Black Death bacterium has an unpleasant history of use as a weapon. As far back as 1346, the Tartars catapulted plague-ridden corpses over the city walls of their enemies and, unfortunately, as technology and science advanced, so did our ability to use deadly diseases against our enemies. During World War II, the Japanese dropped bombs containing plague-infected fleas on Chinese cities, and the Cold War saw both America and the USSR develop aerosolised Y. pestis. One of today's concerns is that we don't know what happened to all the weapons research carried out in the USSR, meaning that weaponised, antibiotic-resistant Y. pestis must be considered a potential bioterror threat. So understanding how the plague bacterium causes disease in humans is vital for the future development of new treatments and vaccines. It is also simply a really interesting pathogen, thanks to its unique way of ensuring it survives long enough in the host to be transmitted to other unfortunate victims.

Tuesday, 21 February 2012

There’s a lot in the news at the moment about a little boy who has been diagnosed with Gender Identity Disorder and is now living as a girl. I can’t quite decide how I feel about this. Part of me thinks it is awesome that his parents and teachers are being so supportive—god knows we could do with a bit more understanding when it comes to adults who identify with the opposite gender to the one their chromosomes dictate. But there’s another part of me that is: a) hugely disturbed by the parents’ motives in plastering this five-year-old all over the newspapers and internet, and b) worried that too much emphasis is put on a person being either ‘male’ or ‘female’, especially at such a young age.

Despite what certain media reports might tell you, there is no such thing as a ‘male brain’ or a ‘female brain’. The truth is, no one really knows how our minds decide to associate with one gender or the other—is it physical, or chemical, or psychological, or a mixture of all three? Our entire personality certainly isn’t a product of our genes, so why are we so fixated with this idea that we are born a certain, fixed way when it comes to gender identity? Most people would be furious to be told that their upbringing and experiences have had no effect on their personalities—of course we don’t arrive on Earth with all our views and personality quirks preformed. Yet, when it comes to complicated and controversial topics such as gender identity, many seem determined to relinquish all control over something so integral to who we are as a person. Of course there might be a biological or chemical cause (or causes) for Gender Identity Disorder—but can we honestly say cultural gender definitions play no role?

I think my big problem comes down to society’s definitions of what makes a girl and what makes a boy, as if the two are set in stone. You don’t like playing with dolls? Yeah, you’re male. You like talking to people and are great at empathy? Ohhh, such a girl. It’s ridiculous. Especially when there is no evidence that traits such as these are intrinsically ‘male’ or ‘female’. Whenever there is a perfectly reasonable scientific study into the physical characteristics of the brains of men and women (some brain disorders have much higher rates in a particular sex, meaning we can’t ignore these differences), certain non-scientists insist on using the data to make sweeping generalisations about the sexes that reinforce stereotypes and are simply not backed up by the science. In reality, many of these supposedly scientifically-supported gender differences are completely mythical.

Let’s start with the old favourite ‘brains develop differently in girls and boys’. A school in Florida is not unique in its support of single-sex schooling, and backed up its policy with:

‘‘In girls, the language areas of the brain develop before the areas used for spatial relations and for geometry. In boys, it’s the other way around.’’ and ‘‘In girls, emotion is processed in the same area of the brain that processes language. So, it’s easy for most girls to talk about their emotions. In boys, the brain regions involved in talking are separate from the regions involved in feeling.’’

Is there any real scientific evidence for this? Nope. It turns out that the early studies that led to this hypothesis have not been backed up by more detailed analyses. Yet so many people persist with the idea that ‘boys are better at maths, girls are better at emotions’ as if it is a known fact—and this ‘fact’ has made its way into policies that affect how kids are educated! And all that ‘girls develop faster than boys’ stuff? Yeah, that’s not backed up by the evidence either. Despite widespread beliefs, neuroscientists do not know of any distinct ‘male’ or ‘female’ circuits that can explain differences in behaviour between the sexes.

So basically, studies into brain structure have yet to identify any specific difference between the brains of the two sexes that leads to a specific difference in behaviour. Yet boys and girls do behave differently if we take an average over an entire population. (And, yes, I realise averages are rubbish when it comes to making judgements on an individual level.) Let’s use one of the most obvious and earliest differences as an example—appreciation of the colour pink. Were I to stick all Britain’s little girls into one blender and all the boys into another, the former mixture would average out at a pink colour with a sprinkling of hearts and ponies, and the latter would be camouflage with a shot of train fuel and maybe a gun poking out the top.

If there is no proof for the existence of a defined, biologically male or female brain at birth, how do we explain the differing colours of our average-child-smoothies? There's always the issue of what hormones we are exposed to in the womb or after birth, but could it also be that sex differences are shaped by our gender-differentiated experiences? Perhaps small differences in preferences become amplified over time as society, either deliberately or not, reinforces traditional gender stereotypes (Yay, my little boy kicked a ball—sports, sports, sports! Oh, he tried on my high heels? Yeah, let’s just ignore that). How much of our gender identity is truly hardwired into our brains from birth and how much is culturally created?

This is why I have a problem with the little boy diagnosed as ‘a girl trapped in a boy’s body’ that I mentioned at the start of this rambling monologue. By trying their best to define him as a ‘girl’ rather than as an individual, the parents and school are doing the exact same thing that they were trying to avoid—attempting to fit him into a gender-shaped box which, in reality, few people truly belong in. In the end, my own opinion does come down on the side of those trying to support this child (but not with the asshats using her to make money), but I am concerned that they are swapping one rigid set of gender rules for another. There's a lot more to being a woman than occasionally wanting to be a princess and surely a five-year-old has a long way to go before they can be accurately pigeon-holed, if at all.

In my perfect world, children would be allowed to experiment without anyone making any judgements or diagnoses (why do we need a medical term to make it acceptable for a small child to play around with wearing a dress, or growing their hair long?). That way, when they were mature enough, they would be free to make a balanced and personal decision on who they want to be and how they can best fit in with the rest of the world, including with our culturally defined ideals of gender.

Understanding how differences between the sexes emerge has the potential to tell us so much about the nature-nurture interaction, and could help us understand why some people associate so strongly with the opposite sex. But, unfortunately, this research is open to careless interpretation by the media and public, who seem determined to use it to reinforce the gap between men and women rather than to tell us more about what shapes each of us as a person.

Further reading:
This is a really interesting article on neurological sex differences published in Cell by the author of Pink Brain, Blue Brain: How Small Differences Grow into Troublesome Gaps – and What We Can Do About It, and some feminist perspectives on Sex and gender and trans issues.

Monday, 20 February 2012


I have a slight obsession with the sewers, which I don’t think is entirely normal or healthy. It’s the architecture more than the sewage itself but, as it happens, this post concerns the latter. Our tour of interesting things poo-related starts in London of 1858 and a period of history known as the Great Stink.

The first half of the 19th century saw the population of London soar to 2.5 million, and that is a whole lot of sewage—something like 50 tonnes a day. It is estimated that, before the Great Stink, there were around 200,000 cesspits distributed across London. Because it cost money to empty a cesspit, they would often overflow—cellars were flooded with sewage and, on more than one occasion, people are reported to have fallen through rotten floorboards and drowned in the cesspits beneath.

Sewage from the overflowing cesspits merged with factory and slaughterhouse waste before ending up in the River Thames. By 1858, the Thames was overflowing with sewage, and a particularly warm summer didn't help matters by encouraging the growth of bacteria. The resulting smell is hard to imagine, but it would have been particularly rich in rotten-egg-scented hydrogen sulphide. It apparently got so bad that the House of Commons resorted to draping curtains soaked in chloride of lime in an attempt to block out the stench, and even considered evacuating to a location outside the city.

At the same time, London was suffering from widespread outbreaks of cholera, a disease characterised by watery diarrhea, vomiting and, back in the 19th century, rapid death. But no one really knew where cholera came from. The most widely accepted theory was that it was spread by air-borne ‘miasma’, or ‘bad air’. Florence Nightingale was a proponent of this theory and worked hard to ensure hospitals were kept fresh-smelling and that nurses would ‘keep the air [the patient] breathes as pure as the external air’. However, when it came to cholera, this theory was completely wrong.

A doctor called John Snow was one of the first people to suggest that the disease was transmitted by sewage-contaminated water—something of which there was a lot in 19th-century London. Supporting his hypothesis was the 1854 cholera outbreak in Soho. During the first few days, 127 people on or near Broad Street died and, by the time the outbreak came to an end, the death toll stood at 616. Dr Snow managed to identify the source as the public water pump on Broad Street, and he convinced the council to remove the pump handle to stop any further infections (although it is thought the outbreak was already diminishing by that point).

From a 19th Century journalist on the problem of cholera in London:
A fatal case of cholera occurred at the end of 1852 in Ashby-street, close to the "Paradise" of King's-cross - a street without any drainage, and full of cesspools. This death took place in the back parlour on the ground floor abutting on the yard containing a foul cesspool and untrapped drain, and where the broken pavement, when pressed with the foot, yielded a black, pitchy, half liquid matter in all directions. The inhabitants, although Irish, agreed to attend to all advice given to them as far as they were able, and a coffin was offered to them by the parish. They said that they would like to wait until the next morning (it was on Thursday evening that the woman died), as the son was anxious, if he could raise the money, to bury his mother himself; but they agreed, contrary to their custom on such occasions, to lock up the corpse at twelve o'clock at night, and allow no one to be in the room. On Friday, the day after death, the woman was buried, and so far it was creditable to these poor people, since they gave up their own desires and customs, which bade them retain the body.

George Godwin, 1854 - Chapter 9, via http://www.victorianlondon.org/index-2012.htm

The London sewage problem was finally addressed by the introduction of an extensive sewer system overseen by the engineer Joseph Bazalgette. In total, his team built 82 miles of underground sewers and 1,100 miles of street sewers; the work cost £4.2 million and took nearly 10 years to complete.

London sewer system opening - via BBC

We now know that cholera is caused by a bacterium called Vibrio cholerae. In order to become pathogenic to humans, the originally environmental bacterium needs to acquire two bacteriophages (viruses that integrate into the bacterium’s genome)—one that provides the bacterium with the ability to attach to the host’s intestinal cells and one that leads to secretion of a toxin that results in the severe diarrhea associated with this disease.

Now I don’t often get teary-eyed at scientific meetings but, several years ago, a lecture by a guy called Richard Cash made me remember why I’d got into science in the first place. See, cholera is a disease which kills around 50-60% of those infected (sometimes within hours of the first symptoms) but with treatment, the mortality rate drops to less than 1%. And the reason that this disease is now almost completely curable is down to Professor Cash. The problem with cholera is that a patient can lose something like 20-30 litres of fluid a day and death occurs due to dehydration. So Cash and his team came up with an unbelievably simple solution—replace the patient’s fluid and electrolytes as quickly as they are lost. Oral rehydration therapy is a solution of salts and sugars, and is thought to have saved something like 60 million lives since its introduction. Patients who would have died within hours can now make a recovery within a day or two. Awesome, right?
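For the curious, the recipe really is that simple. The figures below are roughly the modern WHO reduced-osmolarity formulation, quoted from memory and so best treated as approximate; a few lines of Python are enough to check that the salts and sugar add up to the intended osmolarity of about 245 mOsm per litre.

```python
# Approximate WHO reduced-osmolarity oral rehydration salts, per litre of
# clean water. Figures quoted from memory for illustration only; this is
# not medical advice or an exact recipe.

INGREDIENTS = {
    # name: (grams per litre, g/mol, osmotically active particles per unit)
    "sodium chloride":               (2.6, 58.4, 2),   # Na+ and Cl-
    "glucose (anhydrous)":           (13.5, 180.2, 1),
    "potassium chloride":            (1.5, 74.6, 2),   # K+ and Cl-
    "trisodium citrate (dihydrate)": (2.9, 294.1, 4),  # 3 Na+ and citrate
}

total_mosm = 0.0
for name, (grams, molar_mass, particles) in INGREDIENTS.items():
    mmol = grams / molar_mass * 1000
    total_mosm += mmol * particles
    print(f"{name:32s} {mmol:5.1f} mmol/L")
print(f"Approximate osmolarity: {total_mosm:.0f} mOsm/L (target ~245)")
```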


Today, we tend to hear of cholera mainly when it is associated with natural disasters, where contaminated water can spread disease throughout a region whose infrastructure has been severely compromised. One of the most recent outbreaks occurred nearly a year after the Haiti earthquake—cholera left over 6,000 dead and caused nearly 350,000 cases. But, prior to the outbreak, Haiti had been cholera-free for half a century. So where did it come from?


Image available from Wikipedia commons

I mentioned earlier that cholera can result from an environmental strain of bacteria acquiring the phages encoding virulence factors. But, unfortunately, the Haiti outbreak strain was actually brought into the country by the people trying to help rebuild following the earthquake. By comparing the DNA sequence of the outbreak strain with strains known to infect other parts of the world, it was possible to narrow down the source of the outbreak to Nepal. And UN peacekeepers from Nepal were known to be based near the river responsible for the first cases. It is highly likely that it was one of these soldiers who brought the disease to Haiti, and this case demonstrates how quickly cholera can spread if it gets into the water system. Lessons learnt from this outbreak will hopefully lead to visitors from cholera-endemic countries being vaccinated before travelling to post-disaster areas, even if they are showing no sign of the disease. After all, a large proportion of those infected never develop symptoms.
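The genomic detective work is, at heart, a counting exercise: line the outbreak genome up against reference strains from around the world and see which it differs from least. The sketch below uses invented toy sequences and generic isolate names, nothing like real genome lengths, purely to show the logic.

```python
# Toy strain comparison: count the differences between aligned sequences
# and report the closest reference. Sequences and isolate names are invented;
# real analyses compare whole genomes and build phylogenetic trees.

def snp_distance(a, b):
    """Number of positions at which two aligned sequences differ."""
    return sum(1 for x, y in zip(a, b) if x != y)

outbreak = "ATGCCGTAACGT"
references = {
    "reference isolate A": "ATGCCGTAACGA",   # 1 difference
    "reference isolate B": "ATGACGTTACGT",   # 2 differences
    "reference isolate C": "TTGACGTAACCT",   # 3 differences
}

distances = {name: snp_distance(outbreak, seq) for name, seq in references.items()}
print(distances)
print("Closest match:", min(distances, key=distances.get))
```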

The biggest obstacle in the way of eradicating cholera today is poor sanitation leading to contamination of drinking water. In some parts of the world, the link between hygiene and disease prevention is not as obvious as it is to us in the Western world. Cholera isn’t a disease which requires complicated drugs or vaccines to prevent—washing hands with soap, avoiding contact with human waste, and clean drinking water would make all the difference. 

Friday, 17 February 2012

I went to a birthday gathering in a pub the other day to which someone had brought along the game Jenga. Putting aside any conclusions you may want to make as to just how exciting it must be to party with my friends and me, the game actually illustrates an interesting point about evolution. Sort of. 

The idea of Jenga is that you stack up these little sticks of wood and, taking turns, pull out the pieces one at a time in the hope that you won’t collapse the entire tower. If you’re very careful (and haven’t had more than one pint), it is possible to strip down the tower to the bare minimum of pieces that are required to keep it upright. But pick one of the essential load-bearing pieces and the whole thing comes crashing down on top of everyone’s drinks.

And, in a way, evolution is playing Jenga with our genes.

Jenga - image from Wikipedia Commons


You’d think that, after millions of years, our genomes would be stripped-down, streamlined collections of only the DNA we require to be us; nothing more, nothing less. This hypothesis is backed up by the fact that almost all the genes in eukaryotic genomes are conserved—this means that they are found across many species and have persisted in the population for far longer than you’d expect if they weren’t absolutely necessary for survival. The loss of non-essential genes can actually be seen in many parasitic species. The leprosy bacterium, for example, is a much reduced version of the microbe which causes tuberculosis. It has lost around half of its genes because it doesn’t need them anymore.

But here's the problem: scientists have known for ages that it is possible to delete many of the genes found in eukaryotic organisms with no noticeable effect. So a group at the University of Toronto decided to address the question of whether the C. elegans worm really needs all its genes, and their work was recently published in Cell.

C. elegans - Image is from Wikipedia Commons.
The method used by this group was especially clever because, instead of deleting single genes and looking at whether the worm survives, they tested the effect of gene loss over several generations and in competition with other worms. After all, this is what happens during evolution—survival of the fittest and all that. The basic method showcased in this paper used something known as RNA interference to knock down the expression of a given gene (RNA interference literally interferes with the synthesis of a protein by sequestering away the mRNA recipe before it can give the cell any instructions).

The scientists mixed those worms in which a gene had been knocked-down with the original worms. If the gene being tested proves to be vital, the knocked-down worms will be lost over successive generations due to competition with the original, fitter worms. And, fitting with the idea that we (and by ‘we’ I am referring to all eukaryotes including worms; some people are more worm-like than others, though) only have the genes we need to survive, nearly all the genes in C. elegans were found to impact fitness when knocked down.
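To see why competition over many generations is so much more sensitive than a one-off knockout experiment, here is a minimal sketch with an invented fitness cost. A 5% disadvantage is barely visible after one generation, but it all but removes a strain from a mixed population after a hundred.

```python
# Toy competition between a knocked-down strain and wild-type worms.
# The 5% fitness cost is an invented number, purely for illustration.

def knockdown_frequency(cost, generations, start=0.5):
    """Frequency of a strain with relative fitness (1 - cost) after
    competing against wild-type for a given number of generations."""
    freq = start
    for _ in range(generations):
        mutant = freq * (1 - cost)
        wild_type = (1 - freq) * 1.0
        freq = mutant / (mutant + wild_type)
    return freq

for gens in (1, 10, 50, 100):
    print(gens, "generations:", round(knockdown_frequency(0.05, gens), 3))
# 1 generation:   ~0.49  (indistinguishable from no effect)
# 100 generations: ~0.006 (the knocked-down worms have essentially vanished)
```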

This is not what was suggested from all the experiments in which it was found that single genes could be deleted without any obvious effect on the organism. The explanation is probably that different genes play a role under different conditions. This would mean that it might be possible for one gene to be deleted in the laboratory but, were the mutant to be let out into the big wide world, with all its various stresses and challenges, it would be seriously impaired in its survival.

Interestingly, many more genes are found to be essential when this method is used in C. elegans than are identified by similar experiments in yeast. The authors of this paper suggest that this is down to selective pressures being very different for single-celled and multi-cellular organisms. Whereas something like yeast only has to deal with one environmental condition at a time, a multi-cellular organism is forced to juggle the needs of lots of different cell types which are all under different pressures of their own. A multi-cellular creature is far more complex than a unicellular organism and the genes required are therefore more finely tuned. A little like playing Jenga with not just a tower but an entire city and…OK, the analogy is collapsing all around me so I am going to give up and have a drink instead.

Sunday, 12 February 2012


Like animals, plants can be infected by a range of pathogenic organisms. And, like animals, plants possess an immune system to fight off attacks from pathogens. The plant immune system is analogous to the innate immune system in higher eukaryotes but does not involve mobile immune cells such as macrophages. Instead, it is every cell for itself when dealing with a potential infection.

The plant innate immune system recognises molecules common to groups of infecting microorganisms, known as microbe-associated molecular patterns (or MAMPs). When surface receptors bind these MAMPs, the plant responds in a non-specific manner—for example, by inducing production of antimicrobial agents that can protect other parts of the plant, or by initiating cell death in order to prevent spread of an infection.

Successful pathogens, however, have ways to get around the initial immune response. By injecting effector molecules into the plant cell, they are able to interfere with the cell’s ability to mount an effective response. This has led to the evolution of a second branch of the innate immune system in plants which recognises a pathogen’s effector molecules once they get inside the cell, or responds to the downstream effects of these effectors on the plant cell.
The two branches of the plant innate immune response to a pathogen.

The innate immune system is on the front-line in a plant’s battle against infection, so it needs to be extremely good at recognising invading pathogens. Despite the importance of the innate immune system’s ability to recognise threats, little is known about the range and diversity of the MAMPs capable of triggering the immune response. 

So what makes a good MAMP? Because of the non-specific nature of the innate immune response, it is impossible for the receptors on a plant cell to recognise every protein found in every pathogen. Therefore, the immune system focuses on those proteins which are commonly found in a range of infectious organisms—these tend to be important proteins with a vital function across many species. But a pathogen has its own ways of avoiding being recognised and subsequently killed. One method is to vary those proteins that are recognised by the host's immune system so that they are no longer detected. For this reason, such proteins are under strong positive selective pressure to diversify—natural selection will lead to the evolution of pathogens possessing mutated proteins that are no longer recognised by the host's immune system. However, the more important a protein, the less likely it is that a random mutation will be tolerated. So vital proteins are also under strong negative selective pressure to maintain their function.

This paradoxical situation results in different regions of an immune-system-recognised protein being under either positive or negative selective pressure, depending on whether a mutation in that region disrupts host recognition or destroys protein function. By looking for proteins with this particular pattern of positive and negative selection, a group at the University of Toronto searched the genomes of a number of plant pathogens for potential elicitors of innate immunity, and their work was recently published in PNAS.
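In practice, that tell-tale pattern is usually read out by comparing, region by region, the number of amino-acid-changing (non-synonymous) substitutions with the number of silent (synonymous) ones. The sketch below uses invented counts and a deliberately crude ratio just to illustrate the logic; the real analysis uses proper codon-based dN/dS models.

```python
# Crude illustration of scanning a pathogen protein for regions under
# diversifying (positive) versus purifying (negative) selection.
# Substitution counts per region are invented; real pipelines use dN/dS.

regions = {
    # region: (non-synonymous changes, synonymous changes) across strains
    "surface loop A": (12, 3),   # many amino-acid changes
    "catalytic core": (1, 9),    # almost none: function must be preserved
    "surface loop B": (8, 2),
    "binding pocket": (0, 7),
}

for name, (non_syn, syn) in regions.items():
    ratio = non_syn / max(syn, 1)      # very rough stand-in for dN/dS
    if ratio > 1:
        verdict = "diversifying: candidate region seen by the plant immune system"
    else:
        verdict = "purifying: conserved, vital for function"
    print(f"{name:15s} ratio={ratio:.1f} -> {verdict}")
```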


After screening the genomes for potential immune response elicitors, they synthesised the corresponding peptides and inoculated them into A. thaliana (a species of cress commonly studied by plant biologists). These plants were then challenged with a pathogen to determine whether the peptides could suppress virulence, indicating that they had triggered the innate immune response.

In total, the researchers found 55 new peptides capable of switching on the innate immune response. It is hoped that this work will give an insight into how co-evolution of plants and their pathogens has occurred. In addition, understanding how the plant innate immune system works could make it possible to synthesise new antimicrobial agents capable of transiently protecting plants from pathogens, or even to genetically engineer improved plants with better disease resistance.

Wednesday, 1 February 2012

To understand why infectious diseases make us ill, it helps to consider disease from the pathogen's point of view. Bacteria, viruses and parasites did not evolve simply to cause illness and suffering; virulence is a by-product of a pathogen's fight for survival. Because an infectious agent which incapacitates its host before it has had the chance to be transmitted is an evolutionary dead-end, the key to survival is striking the correct balance between transmissibility and virulence. It's a numbers game—a pathogen needs to divide in sufficient numbers to overcome the efforts of the host's immune system for long enough to ensure that it will be transmitted to a new host. But exploit the host too much, and there is the risk that the pathogen will be left homeless.
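That balancing act can be put into a toy model: let transmission rise with virulence but with diminishing returns, while more virulent infections lose the host sooner. The expected number of new infections then peaks at an intermediate virulence. All the numbers below are invented for illustration.

```python
# Toy virulence-transmission trade-off. Transmission rises with virulence v
# but saturates, while the infectious period shrinks as (recovery + v) grows.
# All parameter values are invented for illustration.

def transmission(v):
    return 3.0 * v / (0.5 + v)               # more virulence, more transmission, saturating

def new_infections(v, recovery=0.2):
    return transmission(v) / (recovery + v)   # expected new infections per host

candidates = [0.05, 0.1, 0.3, 0.5, 1.0, 2.0]
for v in candidates:
    print(f"virulence={v:<4} expected new infections={new_infections(v):.2f}")
print("Virulence that maximises spread:", max(candidates, key=new_infections))
# The optimum sits in the middle: too mild and the pathogen barely spreads,
# too vicious and it loses its host before it can be passed on.
```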

Pathogens have evolved various solutions to this paradoxical situation. Mycobacterium tuberculosis, the bacterium responsible for TB, has been infecting humans for thousands of years and it has evolved to be extremely good at it. Because this disease first emerged when we lived in isolated communities, M. tuberculosis became adept at asymptomatically infecting as many people as possible for extremely long periods of time, causing active disease in only a proportion of those infected. In this way, M. tuberculosis ensured its prehistoric hosts would survive long enough to encounter other humans to which they could spread the disease.

The waiting game works for pathogens like M. tuberculosis, where close contact between hosts is required for transmission. But a disease such as malaria, which is spread by a secondary vector, can afford to make the host much sicker and still guarantee the infection will be passed on to others. Diarrheal diseases such as cholera can be similarly virulent. In this case, the infection is spread via contaminated water, meaning that the bacterium responsible can be transmitted even when it replicates in the host at such high levels that the patient rapidly succumbs to the infection and dies.

Thinking about how pathogens evolve to ensure their own survival led me to this recent paper published in Scientific Reports. This work is interesting in that the authors consider the role of host evolution, as well as that of the pathogen, in determining disease outcome. In the case of highly virulent infectious agents, might the rapid death of an infected host actually be beneficial to the population as a whole?

The idea behind this hypothesis is that, if a member of the host population dies immediately upon infection, it can protect the rest of the population from secondary infections. The team from the University of Tokyo investigated the infection of Escherichia coli with bacteriophage lambda. They used a mixed population of E. coli containing an altruistic host, which commits a bacterial version of suicide as soon as it is infected, and a susceptible host, which permits multiplication and transmission of the phage. When infections were carried out in a structured habitat, in which contact between hosts was limited to close neighbours, the presence of the altruistic hosts protected the overall population from being overcome by the infection.
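The role of spatial structure is easy to see in a very simplified model of my own (not the paper's actual simulation): put the bacteria on a line, let the phage spread only to immediate neighbours, and let altruists die without passing the infection on.

```python
import random

# Toy 1D model of altruistic suicide (my own illustration, not the paper's
# model). Bacteria sit on a line; an infected cell passes the phage only to
# its immediate neighbours. Altruists die on infection without passing it on.

def epidemic(frac_altruist, n=100, rounds=60, seed=1):
    random.seed(seed)
    cells = ["altruist" if random.random() < frac_altruist else "susceptible"
             for _ in range(n)]
    state = ["healthy"] * n
    state[n // 2] = "infected"                   # single initial infection
    for _ in range(rounds):
        new_state = state[:]
        for i, s in enumerate(state):
            if s != "infected":
                continue
            for j in (i - 1, i + 1):             # try to infect the neighbours
                if 0 <= j < n and new_state[j] == "healthy":
                    if cells[j] == "altruist":
                        new_state[j] = "dead"    # suicide: the chain stops here
                    else:
                        new_state[j] = "infected"
            new_state[i] = "dead"                # infected cells lyse
        state = new_state
    return state.count("healthy")

print("Survivors with no altruists:  ", epidemic(0.0))  # the wave sweeps the whole line
print("Survivors with 30% altruists: ", epidemic(0.3))  # the epidemic stalls after a few cells
```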

The researchers also observed the emergence of phage mutants that could bypass the altruistic host suicide mechanism. By not killing the host, these random mutants ensured that they could be passed on to other bacterial cells and guaranteed their own survival.



The presence of altruistic hosts, which commit suicide upon infection, protects the entire population, including susceptible hosts.


This work demonstrates that virulence has not evolved as a result of the pathogen alone, but is influenced by the interaction between the host and the pathogen. In a way, this represents an ‘arms race between pathogen infectivity and host resistance.’ The pathogen will favour lower virulence in order to maintain a sustainable symbiosis, while the host population as a whole benefits from high virulence even though individuals die as a result.

Suicidal defence has previously been described in multi-cellular organisms, where infected single cells are rapidly destroyed to prevent spread of the infectious agent throughout the entire organism. Taking these findings and attempting to extrapolate them to draw conclusions about how human evolution has shaped pathogen virulence is perhaps going too far. The huge difference in the generation times of a human and a bacterium means that the majority of the evolutionary contribution to this particular arms race comes from the bacterium's side. However, this kind of study does show that, where the survival of two organisms is so intertwined, we cannot consider one without taking the involvement of the other into account.

Evolution may move too slowly for humans to compete with pathogens in this way, but the environmental changes that we make have a huge impact on the ability of bacteria and viruses to infect us. This has already been observed in the case of cholera. As improvements in sanitation become more widespread, highly virulent strains are disappearing. This is because those strains which incapacitate the host very rapidly can no longer be as easily passed on, meaning that less virulent strains that do not kill so quickly have the advantage.

It is also interesting to think about how our modern way of life can contribute towards creating epidemics. For example, the bird flu threat would not be quite so concerning if it wasn't for air travel providing the potential for any emerging epidemic to spread around the entire world. A highly virulent pathogen is likely to be fairly short-lived unless it has a way to spread very rapidly to a large number of hosts. Take Ebola, for example—one of the world's most deadly diseases, yet outbreaks can be confined to relatively small areas and burn out quickly. In this regard, Ebola is actually a fairly unsuccessful pathogen. M. tuberculosis, on the other hand, remains a global issue predominantly due to its ability to infect a huge proportion of the population without causing rapid death of the host. In this case, patience can pay off for a pathogen.