Friday 26 October 2012



I recently finished a month-long British Science Association Media Fellowship, spending three weeks at Nature and one week at the British Science Festival in Aberdeen. I’ve talked more about my thoughts on this experience at the Wellcome Trust blog.

I’m now left wondering what on Earth I am going to do with all my newfound skillz. See, I exist at a leisurely gastropod-like pace, whereas the news media seems to be more of a fast-moving cephalopod. Science=three year deadlines that can meander off in an unexpected direction at any point; news=short window until it’s too old for anyone to really care. So using this blog to write about scientific advances (my pre-placement plan) is a pretty stupid idea unless I’m going to add something that isn’t already being said faster and better by a professional news outlet. Mollusc-based metaphors, sadly, aren’t quite enough; I think I’m going to have to develop opinions. We will see how that turns out.

Anyway, I’d recommend applying for a Media Fellowship to any scientists who are interested in how the news works. Then you, too, can be plunged into a metaphysical quandary about your place in the science media world.

Oh, and here’s my big self-aggrandising list of things what I wrote while on my media placement:

Nature News articles and blog posts:

Scientists do the wet dog shake
Nerve growth factor linked to ovulation
Helium reveals gibbon’s soprano skill
There are fewer microbes out there than you think
Resistance to backup tuberculosis drugs increases
Hepatitis C drug trial halted after patient death
Photosynthesis-like process found in insects

Research Highlights and News in Brief:

Rodent that cannot gnaw
Infection breaks truce
Inflamed guts boost bad bacteria
Cigarette smoke boosts biofilms
Hepatitis C halt
Resistance warning

Wellcome Trust Blog articles:

From growth media to news media
No such thing as a stupid question
When the drugs don't work

British Science Association website:

Stereotypes form by ‘Chinese whispers’
Sex and sewage
Cows and cars
Sensing hidden oil reserves
Shock – balanced diet is healthy!

Image: Neurons in the brain – illustration. Benedict Campbell. Wellcome Images

Saturday 22 September 2012


Kate Middleton’s boobs are everywhere, both in the flesh and spirit. That sounds like the plot of a horror film in itself (“Nooooo, we’re surrounded, I’m suffocating!”). But what I really want to bring to your attention is how boobs are somehow capable of turning normal, rational people into adipose-for-brains morons. Not coming to a cinema near you - Attack of the Insidious Mind-Control Mammaries.

What is this post all about? I hear you ask. To answer that question, let’s go back more than fifteen years to when Kat (that’s me, hi everyone!) was an immature, average-looking, shy/slightly weird fourteen-year-old. Six months later, young Kat looked and acted pretty much the same apart from two not-so-small changes. And, with the materialisation of those breasts, there came a complete annihilation of teenage Kat’s faith in humanity.

Men old enough to be her father asking her out and getting angry when she politely declined, van-drivers slowing down to shout suggestions of what they wanted to do to her breasts, every tenth car holding down their horns as they passed her, wolf-whistles, strangers staring at her chest while licking their lips or nudging their mates, men ‘accidentally’ touching her on the train. Every day, all the time, until it gradually tailed off when she was around 22.

Friends were jealous, her mother answered her unease with “you should be flattered!”, other women gave her dirty looks when she inspired explicit requests from men. And it wasn't sexual harassment or, on the occasions when their hands slipped or they tripped or whatever, sexual assault. Those things were dark and menacing, confined to shadowy alleyways and drunk girls stumbling alone out of seedy bars in the early hours. Not something constant and blatant and backlit by bright daylight.

Now here's where I attempt to make my point in an oh-so-clever way. Here's a quick test: reading the above, did you think a) wow, that's shitty, b) stop exaggerating, that's ridiculous, or c) wow, it must be hard being soooo pretty, stop boasting and get over yourself?

See, my big problem isn't really with the men who thought nothing of propositioning the teenage me. What makes me unbelievably sad is all the nice, reasonable, respectful people who roll their eyes when I try to explain that there's something very wrong with society's attitudes to women. We are constantly bombarded with images of female sexual availability - advertising, music videos, daily tabloids that celebrate male achievements and female mammaries. The message is that it's OK to ogle women's bodies in public, that secretly photographing celebrities without their clothes is something that they kind of asked for when they became famous, that treating women like sexual objects when they clearly don't want to be treated that way is perfectly fine.

"So don't read The Sun," people say, failing to realise that even they have come to accept the objectification of women as not a big deal. The kind of harassment I experienced has been normalised to the point that people judge me as arrogant or boasting or, at best, overly-dramatic or a boring feminist when I try to talk about it.

I look around my train and try to work out which of the normal, polite, suited-up businessmen, were I still a teenager, would be the ones to lose their self-control and start aggressively hitting on me. And which of the better-behaved men and women would think I'd somehow asked to be harassed or would just accept it as normal.

Trying to explain to a non-believer how the over-sexualisation of women in the media harms us all is on a par with those little photos of rotten internal organs on cigarette packets. Smokers ignore the pictures, non-smokers such as myself get all worked up about all the passive smoke we may once have inhaled back when it was acceptable for people to exhale toxic chemicals on their friends. The people who need to be convinced aren’t even listening.

But when you tell me that it is natural for men to want to look at breasts, or 'jokingly' ask why I am such a man-hating feminist who wants to ban sex, or inform me that underwear adverts objectify men and it is no different, or roll your eyes and just change the subject, to me it sounds like you're basically saying that you don't think I have any reason to be upset and that I should just 'take the compliment'. That you think it's fine for certain men to treat a woman's breasts as if they are public property - whether they're a fifteen-year-old girl or the future queen.

The image at the top is borrowed from the awesome Indexed. Go check her other charts and stuff, they are very cool.

Sunday 26 August 2012

I made the worst decision of my life the other day and think the guilt will linger for at least another decade. You know those little parks you can go to which are liberally scattered with ducks? Ducks on all the ponds, ducks mingling with the sheep and random alpacas, ducks pestering visitors for food in the picnic areas? Everywhere.

So we visited one of these parks and happened upon a lost duck who’d managed to get itself separated from its flock. On one side of a small fence were ducks floating on a tranquil little pond; on the other side, quacking forlornly and pacing back and forth, was the duck in question.

“Ha ha, why doesn’t it just fly back over?” I said.

“Because it’s a duck and it’s stupid,” the boyfriend said.

Well I’d grown up with pet ducks and chickens (not literally with them. I was raised by actual humans. In a house). But, yes, I did have to agree with the boyfriend that ducks really are very stupid. Once, a fox got into our open-roofed pen and, instead of flying away, the vast majority of our ducks let themselves get eaten. See, not the cleverest members of the avian race.

So we hit on what, in retrospect, is clearly a hideously bad idea. I picked up the stupid duck and gently threw it back into the enclosure. Now, normally when you throw a duck, they flap to the ground. This one didn’t. It kind of crash-landed in the mud.

Hmmmm, I thought, its wings are clipped. Maybe it wasn’t actually meant to be in that particular enclosure…

We watched on with mounting horror as the duck metamorphosed from a cute little creature into Duckzilla the dictator duck from hell. It stormed onto the water and set about trying to KILL one of the innocent residents of the enclosure, swinging it around by the neck and basically trying to force it beneath the concentric circles of a watery doom.

“Oh no,” said the boyfriend. “What have we done?”

Before you get too upset, nothing actually died. Although I think the brutalised Mallard did look a little depressed once it had been released and had recovered from its ordeal enough to return to paddling around dibbling its beak in the (feather-strewn) water. Dictator duck then proceeded to chase the female ducks around, doing an impression of a drunk dude in a cheesy nightclub. And I have been left with the lingering guilt of knowing I have sentenced that whole flock to live under the rule of the avian reincarnation of Josef Stalin.

As I should have remembered from the childhood trauma of witnessing what tended to happen when a new duck was introduced to a flock, birds have what is cleverly termed a ‘pecking order’. This ultimately allows peaceful coexistence of everyone in the flock but, at first, there can be a bit of a power struggle while all the ducks work out who is the toughest, meanest duck that gets to boss all the others around.

See, ducks and chickens, while not being particularly intelligent as species go, do have their own little personalities. We had this one pet chicken that, over its ridiculously optimistic 15-year life, resolutely remained the grumpiest inhabitant of the hen house. It hated everything and everyone and, while all the other chickens would let me pick them up and carry them around, this one would peck anything that came close to touching it. Nothing messed with this chicken. Not even Death, it would seem, considering the fact that it managed to live nearly as long as the world's oldest hen. This chicken was born mean and it died mean, maintaining a remarkably stable personality for all those years.

But other chickens are more pathetic. My parents had this thing for rescuing battery chickens and every year, they would introduce a few featherless, twitchy birds into the pen and we’d watch with crossed fingers to see how they’d fit in with the rest of the flock. Occasionally, there’d be one that, to heap more trauma on top of its already miserable existence, would get pecked so horribly that it would have to be separated from the others until they’d all got used to each other through a chicken-wire barrier. But, in the end, everyone would learn to get on with everyone else, and the battery chickens would grow back their patchy feathers and be less disturbing to look at.

All this has got me thinking about what kind of chicken I am. Do others size me up upon first meeting me and work out that I am very unlikely to peck them back if they try to pinch my choicest vegetable peelings? Am I destined to live out my own life being pushed around by others or can a chicken better its position in the social hierarchy? More importantly, why am I attempting to analyse my own personality based on chickens?

Giving me some hope that we don’t always need to accept our lot in life is an ambitious experiment currently being performed by my slightly mad parents. Chickens can’t exactly fly, providing a good example of how evolution can work in both directions, removing a previously successful adaptation from a species that no longer needs to use it. But my parents are attempting to teach their ex-battery hens to take to the skies using the motivation of grapes dangled from a great height. So far they’ve had moderate success although the chickens’ eyes weirdly roll over white whenever they jump, which is both strange and slightly terrifying to witness.

I think it is close to a metre off the ground. Chickens really like grapes.

Tuesday 21 August 2012

 
A 500-word news article on a research paper and two days to write it. You'd think it would be simple. Yeah, right. One week into my first foray into the world of science journalism and I feel like my soul has been severely paper-cut with my own poorly phrased copy.

To be fair, the majority of the responsibility for this probably lies with me. Today's Important Journalistic Lesson was how much writers have to rely on talking to the author of a paper and other experts in the field. Understanding a paper is one thing, but there's no way a normal human being could absorb enough of the nuances of a subject area in an hour to see where it fits into the bigger picture. What might appear to be a paper about the mating dance of the Irish Pink-spotted squid could be key to the evolution of language to a squid expert. Or the missing link! Or it could just be a paper about oddly-behaved calamari. Sometimes, the press release gives you a clue to the important take-home message of a paper. Other times, the press release is not entirely accurate.

This is why science writers call up the author and ask them lots of questions before writing anything. Unfortunately, through a combination of French public holidays and the only author on the paper capable of answering my questions being off hiking in the wilderness, I had to make do with a slight language barrier and a giant understanding barrier. And when it came to talking to other experts in the field, my two days of expert-hunting experience failed me utterly and the only person I managed to snare was fairly lukewarm about the paper, which wasn't much help.

Then the deadline caught up with me and it was all 'get it submitted', 'check the facts', 'find related articles for the website', 'work out how to use the complicated submission system', 'panic, panic, panic.' Then it was gone and I was left with a vague feeling of disquiet.

Fast forward a few days and I can see that I could have done a few things better. Such as checking that the copy-editors hadn't removed an integral "-like" from the title. Unfortunately, a few scientists who commented on the post also spotted the flaws. So we had to issue a correction. Then someone else pointed out a paper from April that I missed, and we had to correct something else. And I've been in a science-based sulk ever since.

What this did make me realise is that journalists sometimes get an unfairly hard time when science reporting goes a bit wrong. But it's impossible to know everything about a subject and you put a certain degree of trust in the peer review process, the authors accurately representing their work, and the press release not over-selling the importance of the results.

Saturday 18 August 2012

I’m a week into a month-long placement in a science journalism office made up of real journalists and me – a research scientist who is rapidly learning a new respect for those who write about science in a professional capacity. In the past, I know I’ve Googled ‘where do science journalists get their ideas’ and ‘how to write about science’ and ‘what does it mean when your tongue goes green’, and this post touches on at least two of the above. Only from the point of view of someone who isn’t a professional journalist and doesn’t fully know what they are talking about. Tomorrow I will be giving advice on how to do brain surgery.

But first on to the results of my knowledge-leech/journalist-stalking behaviour...

So where do science writers get their ideas?
  • Embargoed papers from the big journals, which journalists can see a few days before the papers are published. Science, Nature and PNAS are the only ones I've seen so far and a huge majority of the covered papers seem to originate in these journals. Even then, maybe only one per issue will be interesting enough to cover.
  • Daily press releases from Eurekalert and other sources, which are again journalist-only resources (I couldn't even register with Eurekalert because I am a working scientist and therefore deemed unworthy/untrustworthy to access embargoed papers). These lists include press releases for papers and important reports and, from what I've seen, contain a lot of dreck as well as the interesting stuff.
  • Keeping an eye on the news for disease outbreaks, natural disasters, pharma company share prices, takeovers, policy info, funding announcements, politicians saying silly things about science, and many other things I am yet to fully grasp. Everyone seems to have their own area of particular interest.
  • Blogs written by scientists or industry insiders can often turn up mentions of new developments in the field, or point out areas that would be worth thinking about. 
  • Conferences can be a good source of soon to be published work and ideas, although some aren't open to journalists. 
  • Then there are the connections journalists build up with scientists or companies, or pet subjects they've been watching for years, waiting for the right paper to come along. A few times, I've heard someone mention a scientist emailing them in quite a non-scientisty bout of self-promotion.
  • Finally, there's trawling through next-tier-down journals for recently released papers that didn't send out press releases and have slipped under the radar. This is harder as, generally, the really world-changing stuff goes into the super-journals, but I did manage to find one really interesting paper and was allowed to write a 120-word summary of it, which was cool.

Wednesday 15 August 2012

For the next month, I am taking a teeny break from science to pretend to be a journalist at Nature News. It's a scheme aimed at teaching working scientists about how the media works by dragging them out of the lab, bleary-eyed with the residual smell of growth media lingering upon their person, into the wonderful world of 10am starts and actual, real deadlines.

Three days in, and I have learnt:

1) Science journalists know much more about science than I do. Sure, they couldn't tell you all the three hundred ways there are to accidentally kill a culture, or the gene number of the TB glycerol kinase. But I'm beginning to wonder why, exactly, I've spent so many years filling my own brain up with all this esoteric trivia while neglecting some of the important stuff. Like science policies that directly impact on my work. Or Exciting Stuff happening in fields that are unrelated to my own.

2) Journalists are not the anti-Christ. From how some people in science talk about the media, you'd think everyone who writes about the news sacrifices babies in their spare time and has no regard for things such as factual correctness or the truth. Yeah, Nature is about as sciencey as science journalism can get - it's aimed at actual scientists for starters, not those other bipedal furless mammals I am occasionally forced to interact with. But I was still surprised by how much effort goes into fact checking and writing a balanced story. I will, in fact, write an entire post about the creation of an article at some point.

3) Some scientists don't half moan. Getting a quick peek into another industry makes me realise how small and insignificant I am to the world of science as a whole. I think maybe it's easy for scientists to forget how lucky we are when we are constantly surrounded by others who share our worries and fears. Yeah, there are plenty of things in science that could do with being fixed. But whining doesn't help anyone. Fixing them fixes them. Being in a different workplace makes it painfully clear that those bitter, complaining scientists who you can find lurking in every lab are not what I want to become </end bitching>

4) So much science is not news. No one wants to read about the latest advance in understanding membrane signalling proteins in Th96 CD61+ T cells, even if the scientist who wrote it is Very Clever and Important. August isn't the greatest month to work in science journalism as there isn't very much going on. I keep trying to find things to write about but very few papers come out that would work as news. 'Surprise, sensationalism and significance' are all required. Makes me realise how insular my own scientific niche is - even the biggest, most self-important scientists in my field have rarely done anything newsworthy when it comes to their millions of Nature/Science/Cell publications.

5) Phone interviews can be painful. If an author is busy, do you wait patiently for them? Noooo, you phone them again and again, and their co-authors, and anyone else you can think of and PESTER! And, if they don't give you a good enough answer, you keep asking until they tell you to go away. It's like working in a call centre, only without bonuses. This is the part of the placement that I don't think I will quite get used to. That cringing, 'I can't believe I made a stranger hate me in the name of science'-feeling. Urrrgh, scarred for life. But, looking at the positives, I think it will cure me of any residual shyness still lingering from childhood.

6) The novelty of a free canteen runs out very quickly.

Sunday 1 July 2012

Finishing Things

So last week I ‘finished’ my latest book. It’s a young adult novel about a girl who can find lost things and the search for a mythical forgotten city with a beating stone heart said to hold the key to immortality. Also, man-eating gargoyles.

I’m not very good at finishing things, so getting to the point when I can bear to stop messing around with a book is a cause for celebration. Oscar Wilde once said: “I was working on the proof of one of my poems all the morning, and took out a comma. In the afternoon I put it back again.”

That’s me—staring at the same page for hours, trying to think up a more evocative way of saying ‘she walked to the door’. And then the dawning realisation that this is the most boring sentence known to mankind (no matter whether she walks, stumbles, or minces) and it needs to die.

At the other end of the spectrum, I see loads of writers who type ‘the end’ and think they’re done. But on top of the four months I spend writing a first draft, I need another three to get my manuscript into a state suitable for inflicting upon others. Sometimes longer (and sometimes an eternity would not be enough. Zombie poodles? What was I thinking?).

One of the most stressful periods of my life was writing up my PhD thesis ridiculously quickly so that I could start a new job (and get paid!). I still cringe when I think of how much better (and shorter) it could have been if I'd been able to go through it a few more times and make some changes. But when my examiners failed to mention what I thought was a declaration of war against the English language, I realised that sometimes my endless faffing actually achieves very little. I suppose there's a balance between getting something right and attempting to polish it to such a high shine that you scrub so hard its bones start to poke through the skin.

So here is how I know when to stop:

Do I have enough distance from the project to see all its faults?
When you’re looking at the same thing day in, day out, it is hard to be objective. But take a break and let the project simmer in its own juices and suddenly all the flaws become all too apparent. Hitting send on that submission the moment you write the last word is not a good idea.

Am I too attached to my precious words to do what’s necessary?
If a character or scene or sentence isn’t adding anything to the plot, it needs to go. Yeah, I might think the idea is awesome but others won't be so impressed with my self-indulgence.

Have I read it and re-read it and removed the majority of the errors?
My Achilles heel is typing ‘that’ instead of ‘than’. And however many times I read something, I will always manage to find one that I’ve missed. The odd mistake is one thing. But it annoys me no end when I hear fiction writers say something along the lines of ‘I’m rubbish at grammar and can’t be bothered to learn, but that’s an editor’s job anyway’. Great way to make a professional first impression.

Am I happy with it?
Chances are, if an agent or publisher takes it on they are going to request tonnes of revisions but that's not a good excuse for not making something the best you can. Competition to be published is huge and submitting something with obvious flaws or plot holes is a terrible idea (and I say this from past experience. Oh the shame).

Once I am satisfied with my answers to these questions, it is time to let go and move on to something new! Hopefully that something new will include finding time to post science-related articles on this blog...

Sunday 27 May 2012


It makes me laugh when I hear people say that they don’t like fantasy or science fiction novels because ‘it’s not real’. All fiction, by definition, is made up. Yet, when it comes to imaginary monsters or aliens or magicians with pointy hats and white beards, many people don’t want to read something so removed from reality. The reason I have a problem with this isn’t that some people don’t want to read the sort of book that I happen to write. These differences in taste are what make the world interesting. But their reasoning does bother me. And this is because all books are about people. In Animal Farm, Orwell made his people into pigs to show the dangers of totalitarianism; Philip K. Dick used androids to make us think about what makes us human in Do Androids Dream of Electric Sheep?; Tolkien had hobbits and elves and wizards, but The Lord of the Rings was about the power of temptation, and humanity’s relationship with death.

There are very few new ideas in the world, but there are a million ways of saying them. The eternal question of what makes us human doesn’t change if you happen to dress it up with the odd dragon. Love still conquers all if it is up against fairies and talking trees. I personally read books in the hope that they will teach me something new, or make me think about something in a different way. And all it takes is someone to package those ideas up in a way that resonates with me. The plot might be a post-apocalyptic fight for survival, but the message is about the futility of war, or the strength of the human spirit, or maybe even the meaning of life.

One of the hardest things to learn as a writer is your ‘voice’. It can be a richness of prose like Dickens, or the inclusion of certain recognisable elements like Roald Dahl, or quirks of language like Shakespeare. Or it can be a unique approach to the rules of grammar reminiscent of Cormac McCarthy. While few think about it, your real-life voice is just as unique and just as capable of boring or intriguing or exciting those who listen. There are people who we enjoy talking to, and there are those who change us—they find a way to say something that gets under our skin and makes us rethink our opinions. The same goes for books. On his literary inspirations, Martin Amis said: “I find another thing about getting older is that your library gets not bigger but smaller, that you return to the key writers who seem to speak to you with a special intimacy. Others you admire or are bored by, but these writers seem to awaken something in you.”

There’s a form of magic to finding a way to say something in a way that sneaks into the head and heart of a reader, and plants the seed that will grow into a new way of thinking about something. But that’s the important part—to only sow the seed instead of trying to ram a fully grown tree down someone’s throat. It’s a sneaky kind of persuasion—tricking someone into coming up with the very idea that you wanted them to have without even noticing you, the writer, quietly whispering in their ear. However, try too hard and a writer’s voice becomes a boastful five-year-old screaming ‘look at me, look at me.’ And it’s something many novice writers struggle with. The fine line between cultivating a unique voice and ensuring that this voice is unobtrusive enough that the reader doesn’t feel clubbed into submission.

There’s a saying among writers: ‘Murder your darlings.’ British writer Arthur Quiller-Couch is quoted as saying, "Whenever you feel an impulse to perpetrate a piece of exceptionally fine writing, obey it—wholeheartedly—and delete it before sending your manuscript to press." It all comes back to the need to control that writer’s voice to a point where it flows over a reader instead of drowning them in a sea of flowery language and esoteric wit. But it’s a paradox for beginning writers who, at the same time as needing to develop their unique voice, also need to know when to rein it in.

My stomach always lurches when I hear writers utter the words, ‘my book is my baby’. It’s a commonly known fact that people tend to lose their objectivity when it comes to their own kids. Revising a novel from first draft terribleness to something others might actually want to read involves listening to criticism, and then ripping all your precious words apart and sticking them back together again. Few people are willing to dismember their ‘babies’ to create Frankenstein’s monster. But the words on the page are not the story, they are merely a vessel to sneak your ideas into someone else’s mind. Words are the tools, but the art is the ideas they conjure in the reader’s head.

The same goes for any career other than writing, science included. Presenting a graph of your data isn’t enough if you can’t find a way to package it as something others want to read. I’m not talking about clever turns of phrase or poetic descriptions, which have no place in scientific writing. The words don’t need to be beautiful to do their job, but it’s a mistake to think they don’t matter. They need to gently prod the reader in the right direction by highlighting the important parts, allowing fellow scientists to reach the same conclusions that the author did, only in the space of thirty minutes rather than ten years. You can’t shout your side of the argument and expect others to give in—a fault I do sometimes see with scientists’ attempts to deal with certain controversial subjects such as cloning. You have to say it in a way that makes someone listen, and then makes them think.

That’s the secret behind any good writing—whether it’s designed to entertain or educate, whether it’s about bacteria or dragons. In the end, everything is about people and how we fit into the world around us.

Saturday 14 April 2012

African sleeping sickness is one of those scary diseases that seems kind of alien to anyone living in the Western world but which is a real threat to those living in sub-Saharan Africa, causing around 50,000 cases each year. The disease gets its name from the most recognisable symptom—a disruption of sleeping patterns after the parasite infects the brain. A recent paper published in PLoS One shed some light on how the parasite makes the treacherous journey from the blood to the brain. But why does a parasite spread by infected blood want to get into our head to start with?
The three forms of trypanosomes - slender, intermediate and stumpy.
Any infectious agent needs to have a plan of attack for dealing with the host’s immune system. Some microorganisms go along the route of actively switching off the immune response. Others hide from the immune cells that would otherwise kill them. The trypanosomes responsible for sleeping sickness use a less subtle but highly effective method to stay one step ahead of the immune system while circulating in the blood. The parasites are coated with ten million copies of the same protein, which is recognised by the host, allowing the immune system to start clearing the infection. But, just as the host starts to get the upper hand, the parasites subtly change this protein disguise so that they are no longer recognised by the immune system.
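
To see why this coat-switching works so well, here is a toy Python sketch of the resulting waves of infection. Every number in it is invented (real trypanosomes have hundreds of coat variants to cycle through, not five), so treat it as a cartoon of the logic rather than a model of the real dynamics: each wave of parasites grows until the immune system learns the dominant coat, and the rare coat-switchers seed the next wave.

import random

random.seed(1)                 # so the toy run is repeatable

VARIANTS = 5                   # invented number of coat proteins (real: hundreds)
SWITCH_RATE = 0.001            # chance that a daughter parasite swaps its coat
KILL_FRACTION = 0.9            # share of a recognised coat cleared each step

population = {0: 100}          # 100 parasites, all wearing coat 0
immunity = set()               # coats the host immune system has 'seen'

for day in range(1, 16):
    offspring = {}
    # every parasite divides in two; daughters occasionally switch coat
    for coat, count in population.items():
        for _ in range(count * 2):
            switched = random.random() < SWITCH_RATE
            new_coat = random.randrange(VARIANTS) if switched else coat
            offspring[new_coat] = offspring.get(new_coat, 0) + 1
    # abundant coats get noticed, and recognised coats are mostly cleared
    for coat, count in offspring.items():
        if count > 1000:
            immunity.add(coat)
        if coat in immunity:
            offspring[coat] = int(count * (1 - KILL_FRACTION))
    population = {c: n for c, n in offspring.items() if n > 0}
    print(f"day {day:2d}: {sum(population.values()):5d} parasites, "
          f"coats recognised so far: {sorted(immunity)}")

Run it and you get the classic saw-tooth parasitaemia chart: the host keeps winning battles and losing the war, because by the time it has learnt to recognise one coat, a handful of parasites are already wearing the next.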


Without drugs, it is impossible for an infected person to deal with the infection and the disease is always fatal. But sleeping sickness has a fairly high rate of relapse even after treatment. One of the reasons for this could be that, at some point, the parasite decides to make the trip from the blood and into the brain. Here, it is effectively protected from drug treatment, and can pass back into the blood system to continue the infection. An evolutionary explanation for this could be that some hosts are better at dealing with the infection than humans, and the brain represents a hiding place from the immune system.

Tsetse fly. Yuk.
Image from Wikipedia
This late stage of the disease—the brain stage—is not well understood. It takes weeks to months for the late symptoms, including confusion, reduced coordination, daytime sleepiness, and insomnia at night, to emerge, and the reasons for this remain elusive. One of the most interesting of these symptoms—the change in sleeping patterns—has an interesting explanation. Sleeping sickness is spread by the tsetse fly. The tsetse fly is one of the less pleasant creatures in the world and it has fairly disgusting table manners. It bites a hole in the skin, vomits up some of its last meal, complete with any parasites, along with agents to prevent the blood from clotting, and then feasts on the resulting blood pool. This isn’t particularly pleasant for the unfortunate owner of the blood. Therefore it helps if the meal happens to be asleep at the time of being fed on.

But how does the trypanosome succeed in altering a person’s sleeping patterns? It appears that this is a side effect of a signalling molecule used by trypanosomes to control cell density. When the parasite gets into the brain, it doesn’t want to cause extensive inflammation and get itself noticed. So it secretes a messaging molecule called PGD2 that tells neighbouring parasites to commit parasite-suicide for the good of the overall population. But PGD2 has also been shown to cause non-REM sleep when injected into the nervous system. So secreting PGD2 directly into the brain is useful to the parasite: a person who falls asleep during the day is far more likely to be bitten by the day-feeding tsetse fly.


The sleeping sickness parasite makes its way to reside between the pia mater and glia limitans at the edge of the brain.
Image from Wikipedia
So how does the parasite get into the brain in the first place? Our brains are cut off from our blood supply by the blood-brain barrier—a barrier which actively prevents such things as parasites from making the trip out of our veins and into our central nervous system. In addition to the blood-brain barrier, we also have a barrier between our blood and the colourless liquid in which our brains float, and it is across this barrier that the parasites make the journey into the brain. Hartwig Wolburg and coworkers demonstrated that this journey takes the parasite through hostile territory until it reaches a position at the edge of the brain where it is protected from the immune system but can still reinvade the blood if it so chooses.

But the group responsible for this work also addressed the question of why the brain stage takes so long to emerge. Something interesting about their attempts to reproduce the brain infection in rats was that it proved impossible to simply inject parasites into the nervous system. Instead, the infection needed to take its usual course, beginning with the blood stage and progressing to the brain stage after some time. It appears that there are three forms of the parasite (shown in the figure at the top of the post)—a stumpy form which does not undergo the variation in its coat proteins and is killed by the immune system, an intermediate form which is responsible for the blood infection, and a slender form which can cross into the brain. How this slender form emerges and whether it really is required for brain infection remains to be determined, however.

Research such as this has the potential to help the development of future vaccines and drugs by teaching us more about how the infection progresses. The current treatment for the later brain stage of the disease involves an arsenic derivative which kills one in twenty people and has been described as ‘fire in the veins’ by those unlucky enough to need to take it. Over the past few years, sleeping sickness case numbers have slowly been decreasing and it is hoped that in a decade this disease may finally be eliminated.

Tuesday 3 April 2012


At this time of year, the Kruger National Park in South Africa reaches temperatures of up to 38 degrees Celsius. This has nothing to do with the subject of this post, but I thought I would use it to illustrate one of my newly recognised great discoveries of the 20th century—in-car air-conditioning. It’s a pretty tenuous link to what I really want to talk about, but the invention of modern air-conditioning occurred in 1902 in Buffalo, New York (thanks to a guy called Willis Haviland Carrier) and it so happens that, while in the Kruger Park, our awesomely air-conditioned 4x4 was charged by a slightly over-exuberant buffalo. Close-up, buffalo are kind of scary.

So a safari in the Kruger tends to involve a lot of driving around, peering through binoculars at what could be an animal but, more often than not, proves to be a large rock. Along the way, it is possible to drop in at a number of rest stops for a deeply unpleasant burger and to look at maps of the park on which other visitors have stuck magnetic stickers indicating the positions in which various animals have been spotted. But it turns out that all the stickers for rhinos have been removed and replaced with a little sign saying that, for conservation reasons, the sightings of rhinos are no longer reported. This kind of sucks.

The Kruger National Park is home to around 10,000 white rhinos (see photo above, taken from the comfort of our air-conditioned 4x4) as well as about 500 black rhinos—a critically endangered species of which there are estimated to be around 3,500 left in the wild. And the main reason rhinos are so endangered? Because people keep poaching them for their horns, which are a key ingredient in Chinese herbal medicine. Just last year, something like 250 rhinos were killed in the Kruger National Park. And, at the same time that we were in South Africa, there was a candlelit demonstration outside the Chinese embassy by a group asking the Chinese government to condemn the use of rhino horn in traditional medicine in an effort to stop this barbaric slaughter of animals.

Now I’m all for preserving cultural traditions, but seriously? Since when is there any scientific evidence that powdered rhino horn is any use in treating fevers and convulsions, conditions for which it is prescribed? And do you know how your ailments are often diagnosed by a Chinese Medicine practitioner? By looking at your tongue. Yes, my tongue is one of my favourite bodily appendages, as evidenced by the multitude of delicious foods I partook of while visiting South Africa (buffalo included). But do I believe that it is some kind of medical window into my health status? Um, no. Based on self-inflicted internet-based Chinese tongue diagnosis, I currently have a Yin deficiency (which can be treated with rhino horn) and something known as ‘Damp heat’. I don’t know what that is, but it sounds troubling.

Something that infuriates me is the on-going trendiness in the Western world for embracing herbal remedies. Here’s the thing—if such remedies were scientifically proven to work, they would be what us scientists like to call ‘prescription drugs’. Otherwise, they are concoctions of herbs and who-knows-what-else mixed up based on unsupported assumptions about how the body and disease works. I couldn’t find any clinical trial which conclusively demonstrated that Chinese herbal therapy has any positive effect. Something there is proof of, though? Arsenic, lead and mercury poisoning as a result of these herbal remedies. Just because it is 'natural' doesn’t mean it isn’t going to kill you.

That’s not to say that some Chinese herbal medicines don’t work. The thing is, how are we meant to know if it remains unregulated and untested? All I know is I am not putting anything in my mouth that has been prescribed as a result of some doctor poking my tongue and requires a rare animal such as the black rhino to be poached to the brink of extinction to provide supposed ‘medicines’.

Sunday 26 February 2012


Yersinia pestis holds the dubious title of the world's most devastating bacterial pathogen. While its glory days of the Black Death are thankfully a thing of the past, this pathogen remains a threat to human health to this day. A recent paper published in PNAS describes how the bacterium switches off the immune system in the lungs, going some way to explain why the pneumonic form of the Black Death is almost always fatal if untreated.

During the Middle Ages, the plague, or Black Death—so called because of the blackening of its victims' skin and blood—killed approximately a hundred million people across the world. In Europe, in particular, between thirty and sixty percent of the population is believed to have perished. Although we now know that the bacterium responsible was transmitted by rat fleas, Europe in the Middle Ages was not known for having a sound grasp of science. Theories to explain the cause of the Black Death included a punishment from God, alignment of the planets, deliberate poisoning by other religions, or ‘bad air’. This final theory persisted for some time, leading seventeenth-century doctors to don bird-like masks filled with strong-smelling substances, such as herbs, vinegar or dried flowers, to keep away bad smells and, therefore, the plague.

While today we can cure the plague with antibiotics, historical treatments were as unreliable as the Middle Ages' understanding of the disease. The characteristic swellings of a victim's lymph nodes were often treated by blood-letting and the application of butter, onion and garlic poultices. But such remedies did little to improve a victim's chances (even if it did make them smell delicious)—mortality rates varied between sixty and one hundred percent depending on the form of the disease afflicting the patient. This led to the desperate population attempting far more extreme measures, such as medicines based on nothing but superstition, including dried toad, or self-flagellation to calm their clearly angry gods.

The three predominant forms of the disease were described by a French musician named Louis Heyligen (who died of the plague in 1348):

"In the first people suffer an infection of the lungs, which leads to breathing difficulties. Whoever has this corruption or contamination to any extent cannot escape but will die within two days. Another form...in which boils erupt under the armpits,...a third form in which people of both sexes are attacked in the groin."

So anything involving the words "attacked in the groin" is clearly a bad thing. But these three forms of the plague come in different flavours of "bad". Of the three, bubonic plague with its unpleasant boils and swellings is the least fatal, killing around two-thirds of those infected. Whereas bubonic plague spreads throughout an infected person’s lymphatic system, septicaemic plague is an infection of the bloodstream and is almost always fatal. The final form, the rarer pneumonic plague, also has a near one hundred percent mortality rate and involves infection of the lungs, often occurring secondary to bubonic plague and capable of being spread from person to person.

One of the most interesting aspects of pneumonic plague is that the first 36 hours of infection involve rapid multiplication of the bacteria in the lungs but no immune response from the host. It is as if the immune system simply doesn’t notice the infection until it is too late to do anything about it. This ability to replicate completely beneath the immune system’s radar makes Y. pestis unique among bacterial pathogens, and a group from the University of North Carolina recently attempted to shed some more light on how Y. pestis achieves this feat, publishing their findings in PNAS.

So is Y. pestis's success down to a) an ability to hide from the immune system, or b) a deliberate suppression of the normal host response to a bacterial infection? To answer this question, the scientists coinfected mice with two strains of Y. pestis—one capable of causing plague in mice and one which is usually recognised and cleared by the immune system. If the bacteria are capable of modifying the conditions in the lung for their own benefit, it should be possible for a non-pathogenic mutant of Y. pestis to survive when co-infected with a virulent strain.

And this is exactly what the scientists found. In the above image, the green bacteria would normally be cleared by the immune system but, in the presence of the pathogenic red strain, they are able to survive. This suggests that the pathogenic Y. pestis is actively switching off the immune system, establishing a unique protective environment that allows even non-pathogenic organisms to prosper. The authors went on to show that this effect isn't limited to strains of plague—other species of bacteria not usually able to colonise the lung can also replicate unperturbed when present as a co-infection with Y. pestis.

Part of this immunosuppressive role is carried out by effectors injected into the host cell by a type III secretion system—a kind of bacterial hypodermic needle. But this isn’t the only mechanism involved and, unfortunately, determining exactly how Y. pestis establishes the permissive environment is proving difficult. The authors of the PNAS paper attempted to use a commonly used approach to investigate which Y. pestis genes are vital for an infection to progress. TraSH screening is a really clever method which involves infecting an animal model with large pools of gene mutants and determining which mutants are lost over the time-course of the infection. In other bacterial species, it is every bacterium for itself, and mutants with a defect in virulence fail to survive in the animal model, giving an insight into which genes are vital for infection. But this does not work well for Y. pestis due to the ability of virulent mutants to permit the growth of impaired mutants that, alone, would be unable to cause disease.
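
Here is a little Python sketch of the logic of such a pooled screen, and of why it breaks down here. Everything in it is invented for illustration (1,000 mutants, 50 of them attenuated), and the biology is reduced to a single yes/no rescue; the point is just that once virulent neighbours create a permissive environment, the attenuated mutants stop dropping out and the screen loses its signal.

import random

N_MUTANTS = 1000
# pretend 50 mutants carry a broken virulence gene and cannot infect alone
ATTENUATED = set(random.sample(range(N_MUTANTS), 50))

def surviving_mutants(permissive_environment):
    """Simulate one pooled infection; return the mutants present at the end."""
    survivors = set()
    for m in range(N_MUTANTS):
        if m not in ATTENUATED:
            survivors.add(m)       # fully virulent mutants always persist
        elif permissive_environment:
            survivors.add(m)       # rescued by their virulent neighbours
    return survivors

# An 'ordinary' pathogen: attenuated mutants drop out of the pool, so comparing
# the input and output pools points straight at the 50 virulence genes.
lost = set(range(N_MUTANTS)) - surviving_mutants(permissive_environment=False)
print(len(lost), "virulence genes detected")   # prints: 50

# A Y. pestis-style infection: the virulent majority switches off lung immunity
# for everyone, nothing drops out, and the screen comes back empty-handed.
lost = set(range(N_MUTANTS)) - surviving_mutants(permissive_environment=True)
print(len(lost), "virulence genes detected")   # prints: 0

In the real experiments the rescue is a biological effect rather than a line of code, but the mirror-image outcome of the two runs is exactly the problem the authors describe: the readout that normally points at virulence genes goes silent.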


Screening for genes involved in infection - an animal model is infected with a pool of single mutants. Those mutants lost during infection are identified and the mutated gene used to learn more about what is required for an infection. This method does not work well with Y. pestis as attenuated mutants can survive in the permissive lung environment created by the other mutants, despite not being able to create this environment on their own.

Part of the modern-day interest in pneumonic plague is, unfortunately, the result of a human rather than a natural threat—bioterrorism. The Black Death bacterium has an unpleasant history of use as a weapon. As far back as 1346, the Tartars catapulted plague-ridden corpses over the city walls of their enemies and, unfortunately, as technology and science advanced, so did our abilities to use deadly diseases against our enemies. During World War II, the Japanese dropped bombs containing plague-infected fleas on Chinese cities, and the Cold War saw both America and the USSR develop aerosolised Y. pestis. One of today’s concerns is that we don’t know what happened to all the weapons research carried out in the USSR, meaning that weaponised, antibiotic-resistant Y. pestis must be considered a potential bioterror threat. So understanding how the plague bacterium causes disease in humans is vital for the future development of new treatments and vaccines. And it is also a really interesting pathogen due to its unique way of ensuring it survives long enough in the host to be transmitted to other unfortunate victims.

Tuesday 21 February 2012

There’s a lot in the news at the moment about a little boy who has been diagnosed with Gender Identity Disorder and is now living as a girl. I can’t quite decide how I feel about this. Part of me thinks it is awesome that his parents and teachers are being so supportive—god knows we could do with a bit more understanding when it comes to adults who identify with the opposite gender to the one their chromosomes dictate. But there’s another part of me that is: a) hugely disturbed about what the parents’ motives are in plastering this five-year-old all over the newspapers and internet, and b) worried that too much emphasis is put on a person being either ‘male’ or ‘female’, especially at such a young age.

Despite what certain media reports might tell you, there is no such thing as a ‘male brain’ or a ‘female brain’. The truth is, no one really knows how our minds decide to associate with one gender or the other—is it physical, or chemical, or psychological, or a mixture of all three? Our entire personality certainly isn’t a product of our genes, so why are we so fixated with this idea that we are born a certain, fixed way when it comes to gender identity? Most people would be furious to be told that their upbringing and experiences have had no effect on their personalities—of course we don’t arrive on Earth with all our views and personality quirks preformed. Yet, when it comes to complicated and controversial topics such as gender identity, many seem determined to relinquish all control over something so integral to who we are as a person. Of course there might be biological or chemical causes for Gender Identity Disorder—but can we honestly say cultural gender definitions play no role?

I think my big problem comes down to society’s definitions of what makes a girl and what makes a boy, as if the two are set in stone. You don’t like playing with dolls? Yeah, you’re male. You like talking to people and are great at empathy? Ohhh, such a girl. It’s ridiculous. Especially when there is no evidence that traits such as these are intrinsically ‘male’ or ‘female’. Whenever there is a perfectly reasonable scientific study into the physical characteristics of the brains of men and women (some brain disorders have much higher rates in a particular sex, meaning we can’t ignore these differences), certain non-scientists insist on using the data to make sweeping generalisations about the sexes that reinforce stereotypes and are simply not backed up by the science. In reality, many of these supposed scientifically supported gender differences are completely mythical.

Let’s start with the old favourite ‘brains develop differently in girls and boys’. A school in Florida is not unique in its support of single-sex schooling, and backed up its policy with:

‘‘In girls, the language areas of the brain develop before the areas used for spatial relations and for geometry. In boys, it’s the other way around.’’ and ‘‘In girls, emotion is processed in the same area of the brain that processes language. So, it’s easy for most girls to talk about their emotions. In boys, the brain regions involved in talking are separate from the regions involved in feeling.’’

Is there any real scientific evidence for this? Nope. Turns out the early studies that led to this hypothesis have not been backed up by more detailed analyses. Yet so many people persist with the idea that ‘boys are better at maths, girls are better at emotions’ as if it is a known fact—and this ‘fact’ has made its way into policies that affect how kids are educated! And all that ‘girls develop faster than boys’? Yeah, that’s not backed up by the evidence either. Despite widespread beliefs, neuroscientists do not know of any distinct ‘male’ or ‘female’ circuits that can explain differences in behaviour between the sexes.

So basically studies into brain structure have yet to identify any specific difference between the brains of the two sexes that leads to a specific difference in behaviour. Yet boys and girls do behave differently if we take an average over an entire population. (And, yes, I realise averages are rubbish when it comes to making judgements on an individual level.) Let’s use one of the most obvious and earliest differences as an example—appreciation of the colour pink. Were I to stick all Britain’s little girls into one blender and all the boys into another, the former mixture would average out at a pink colour with a sprinkling of hearts and ponies, and the latter would be camouflage with a shot of train fuel and maybe a gun poking out the top.

If there is no proof for the existence of a defined, biologically male or female brain at birth, how do we explain the differing colours of our average-child-smoothies? There's always the issue of what hormones we are exposed to in the womb or after birth, but could it also be that sex differences are shaped by our gender-differentiated experiences? Perhaps small differences in preferences become amplified over time as society, either deliberately or not, reinforces traditional gender stereotypes (Yay, my little boy kicked a ball—sports, sports, sports! Oh, he tried on my high heels? Yeah, let’s just ignore that). How much of our gender identity is truly hardwired into our brains from birth and how much is culturally created?

This is why I have a problem with the little boy diagnosed as ‘a girl trapped in a boy’s body’ that I mentioned at the start of this rambling monologue. By trying their best to define him as a ‘girl’ rather than as an individual, the parents and school are doing the exact same thing that they were trying to avoid—attempting to fit him into a gender-shaped box which, in reality, few people truly belong in. In the end, my own opinion does come down on the side of those trying to support this child (but not with the asshats using her to make money), but I am concerned that they are swapping one rigid set of gender rules for another. There's a lot more to being a woman than occasionally wanting to be a princess and surely a five-year-old has a long way to go before they can be accurately pigeon-holed, if at all.

In my perfect world, children would be allowed to experiment without anyone making any judgements or diagnoses (why do we need a medical term to make it acceptable for a small child to play around with wearing a dress, or growing their hair long?). That way, when they were mature enough, they would be free to make a balanced and personal decision on who they want to be and how they can best fit in with the rest of the world, including with our culturally defined ideals of gender.

Understanding how differences between the sexes emerge has the potential to tell us so much about the nature-nurture interaction, and could help us understand why some people associate so strongly with the opposite sex. But, unfortunately, it is open to careless interpretation by the media and public, who seem determined to use it to reinforce the gap between men and women rather than to tell us more about what shapes each of us as a person.

Further reading:
This is a really interesting article on neurological sex differences published in Cell by the author of Pink Brain, Blue Brain: How Small Differences Grow into Troublesome Gaps – and What We Can Do About It, and some feminist perspectives on Sex and gender and trans issues.

Monday 20 February 2012


I have a slight obsession with the sewers, which I don’t think is entirely normal or healthy. It’s the architecture more than the sewage itself but, as it happens, this post concerns the latter. Our tour of interesting things poo-related starts in London of 1858 and a period of history known as the Great Stink.

The first half of the 19th century saw the population of London soar to 2.5 million and that is a whole lot of sewage—something like 50 tonnes a day. It is estimated that before the Great Stink, there were around 200,000 cesspools distributed across London. Because it cost money to empty a cesspit, they would often overflow—cellars were flooded with sewage and, on more than one occasion, people are reported to have fallen through rotten floorboards and to have drowned in the cesspits beneath.

Sewage from the overflowing cesspits merged with factory and slaughterhouse waste, before ending up in the River Thames. By 1858, the Thames was overflowing with sewage and a particularly warm summer didn't help matters by encouraging the growth of bacteria. The resulting smell is hard to imagine, but it would have been particularly rich in rotten-egg-flavoured hydrogen sulphide and apparently got so bad that the House of Commons resorted to draping curtains soaked in chloride of lime in an attempt to block out the stench and even considered evacuating to a location outside the city.

At the same time, London was suffering from widespread outbreaks of cholera, a disease characterised by watery diarrhea, vomiting and, back in the 19th century, rapid death. But no one really knew where cholera came from. The most widely accepted theory was that it was spread by air-borne ‘miasma’, or ‘bad air’. Florence Nightingale was a proponent of this theory and worked hard to ensure hospitals were kept fresh-smelling and that nurses would ‘keep the air [the patient] breathes as pure as the external air’. However, when it came to cholera, this theory was completely wrong.

A doctor called John Snow was one of the first people to suggest that the disease was transmitted by sewage-contaminated water—something of which there was a lot in 19th century London. Supporting his hypothesis was the 1854 cholera outbreak in Soho. During the first few days, 127 people on or near Broad Street died and, by the time the outbreak came to an end, the death toll had reached 616 people. Dr Snow managed to identify the source as the public water pump on Broad Street and convinced the council to remove the pump handle to stop any further infections (although it is thought the outbreak was already diminishing all by itself by this point).

From a 19th Century journalist on the problem of cholera in London:
A fatal case of cholera occurred at the end of 1852 in Ashby-street, close to the "Paradise" of King's-cross - a street without any drainage, and full of cesspools. This death took place in the back parlour on the ground floor abutting on the yard containing a foul cesspool and untrapped drain, and where the broken pavement, when pressed with the foot, yielded a black, pitchy, half liquid matter in all directions. The inhabitants, although Irish, agreed to attend to all advice given to them as far as they were able, and a coffin was offered to them by the parish. They said that they would like to wait until the next morning (it was on Thursday evening that the woman died), as the son was anxious, if he could raise the money, to bury his mother himself; but they agreed, contrary to their custom on such occasions, to lock up the corpse at twelve o'clock at night, and allow no one to be in the room. On Friday, the day after death, the woman was buried, and so far it was creditable to these poor people, since they gave up their own desires and customs, which bade them retain the body.

George Godwin, 1854 - Chapter 9, via http://www.victorianlondon.org/index-2012.htm

The London sewage problem was finally addressed by the introduction of an extensive sewer network overseen by the engineer Joseph Bazalgette. In total, his team built 82 miles of underground sewers and 1,100 miles of street sewers, at a cost of £4.2 million; the work took nearly 10 years to complete.

London sewer system opening - via BBC

We now know that cholera is caused by a bacterium called Vibrio cholerae. In order to become pathogenic to humans, the originally environmental bacterium needs to acquire two bacteriophages (viruses that integrate into the bacterium’s genome)—one that provides the bacterium with the ability to attach to the host’s intestinal cells and one that leads to secretion of a toxin that results in the severe diarrhea associated with this disease.

Now I don’t often get teary-eyed at scientific meetings but, several years ago, a lecture by a guy called Richard Cash made me remember why I’d got into science in the first place. See, cholera is a disease which kills around 50-60% of those infected (sometimes within hours of the first symptoms) but, with treatment, the mortality rate drops to less than 1%. And the reason that this disease is now almost completely curable is largely down to Professor Cash and his colleagues. The problem with cholera is that a patient can lose something like 20-30 litres of fluid a day, and death occurs due to dehydration. So Cash and his team came up with an unbelievably simple solution—replace the patient’s fluid and electrolytes as quickly as they are lost. Oral rehydration therapy is a solution of salts and sugars, and is thought to have saved something like 60 million lives since its introduction. Patients who would have died within hours can now make a recovery within a day or two. Awesome, right?
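Just to drive home how little time there is to act, here’s a back-of-the-envelope fluid-balance sketch in Python. Every number in it is an illustrative assumption (a 25-litre daily loss, a 60 kg patient, and a deficit of 10% of body weight taken as the danger point), not a clinical figure:

```python
# Back-of-the-envelope cholera fluid balance.
# All numbers are illustrative assumptions, not clinical figures.
LOSS_L_PER_DAY = 25.0                        # assumed fluid loss in a severe case
BODY_WEIGHT_KG = 60.0                        # assumed patient weight
CRITICAL_DEFICIT_L = 0.10 * BODY_WEIGHT_KG   # assume ~10% of body weight is the danger point

def hours_until_critical(intake_l_per_day):
    """Hours until the cumulative fluid deficit reaches the critical level."""
    net_loss_per_hour = (LOSS_L_PER_DAY - intake_l_per_day) / 24.0
    if net_loss_per_hour <= 0:
        return float("inf")  # rehydration keeps pace with the losses
    return CRITICAL_DEFICIT_L / net_loss_per_hour

print(f"No rehydration:          {hours_until_critical(0.0):.0f} hours")
print(f"Intake matching losses:  {hours_until_critical(25.0):.0f} hours")
```

On those made-up numbers, an untreated patient is in serious trouble within about six hours, while simply matching intake to losses holds the deficit at bay indefinitely—which is exactly why such a cheap intervention saves so many lives.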


Today, we tend to hear of cholera mainly when it is associated with natural disasters, where contaminated water can spread disease throughout a region whose infrastructure has been severely compromised. One of the most recent outbreaks occurred nearly a year after the Haiti earthquake—cholera left over 6,000 dead and caused nearly 350,000 cases. But, prior to the outbreak, Haiti had been cholera-free for half a century. So where did it come from?


Image available from Wikipedia Commons

I mentioned earlier that cholera can result from an environmental strain of bacteria acquiring the phages encoding virulence factors. But, unfortunately, the Haiti outbreak was actually brought into the country by the people trying to help rebuild following the earthquake. By comparing the DNA sequence of the outbreak strain with strains known to infect other parts of the world, it was possible to narrow down the source of the outbreak to Nepal. And UN peacekeepers from Nepal were known to be based near the river responsible for the first cases. It is highly likely that it was one of these soldiers who brought the disease to Haiti, and this case demonstrates how quickly cholera can spread if it gets into the water system. Lessons learnt from this outbreak will hopefully lead to visitors from cholera-endemic countries being vaccinated before travelling to post-disaster areas, even if they are showing no sign of the disease. After all, something close to 3 in 100 of those infected remain asymptomatic.
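If you’re curious what ‘comparing the DNA sequence’ boils down to, here’s a toy Python sketch. The strain labels echo the story above, but the ten-letter SNP profiles are entirely invented for illustration—the actual analyses used whole genomes and proper phylogenetics:

```python
# Toy source attribution: reduce each strain to a (hypothetical) string of
# SNP calls and find the reference strain closest to the outbreak strain.
reference_strains = {
    "Nepal 2010":      "ACGTTACGGA",   # invented profiles for illustration
    "Bangladesh 2002": "ACGTAACGTA",
    "Peru 1991":       "TCGTAACCTA",
}
haiti_outbreak = "ACGTTACGGA"

def snp_distance(a, b):
    """Number of positions at which two SNP profiles differ."""
    return sum(x != y for x, y in zip(a, b))

for name, profile in reference_strains.items():
    print(f"{name}: {snp_distance(haiti_outbreak, profile)} differences")

closest = min(reference_strains,
              key=lambda name: snp_distance(haiti_outbreak, reference_strains[name]))
print(f"Closest match: {closest}")
```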

The biggest obstacle in the way of eradicating cholera today is poor sanitation leading to contamination of drinking water. In some parts of the world, the link between hygiene and disease prevention is not as obvious as it is to those of us in the West. Cholera isn’t a disease which requires complicated drugs or vaccines to prevent—washing hands with soap, avoiding contact with human waste, and clean drinking water would make all the difference.

Friday 17 February 2012

I went to a birthday gathering in a pub the other day to which someone had brought along the game Jenga. Putting aside any conclusions you may want to make as to just how exciting it must be to party with my friends and me, the game actually illustrates an interesting point about evolution. Sort of. 

The idea of Jenga is that you stack up these little sticks of wood and, taking turns, pull out the pieces one at a time in the hope that you won’t collapse the entire tower. If you’re very careful (and haven’t had more than one pint), it is possible to strip down the tower to the bare minimum of pieces that are required to keep it upright. But pick one of the essential load-bearing pieces and the whole thing comes crashing down on top of everyone’s drinks.

And, in a way, evolution is playing Jenga with our genes.

Jenga - image from Wikipedia Commons


You’d think that, after millions of years, our genomes would be stripped-down, streamlined collections of only the DNA we require to be us; nothing more, nothing less. This hypothesis is backed up by the fact that almost all the genes in eukaryotic genomes are conserved—this means that they are found across many species and have persisted in the population for far longer than you’d expect if they weren’t absolutely necessary for survival. The loss of non-essential genes can actually be seen in many parasitic species. The leprosy bacterium, for example, is a much reduced version of the microbe which causes tuberculosis. It has lost around half of its genes because it doesn’t need them anymore.

But here's the problem: scientists have known for ages that it is possible to delete many of the genes found in eukaryotic organisms with no noticeable effect. So a group at the University of Toronto decided to address the question of whether the C. elegans worm really needs all its genes, and their work was recently published in Cell.

C. elegans - Image is from Wikipedia Commons.
The method used by this group was especially clever because, instead of deleting single genes and looking at whether the worm survives, they tested the effect of gene loss over several generations and in competition with other worms. After all, this is what happens during evolution—survival of the fittest and all that. The basic method showcased in this paper used something known as RNA interference to knock down the expression of a particular gene (RNA interference literally interferes with the synthesis of a protein by sequestering away the mRNA recipe before it can give the cell any instructions).

The scientists mixed those worms in which a gene had been knocked down with the original worms. If the gene being tested proves to be important, the knocked-down worms will be lost over successive generations due to competition with the original, fitter worms. And, in keeping with the idea that we (and by ‘we’ I am referring to all eukaryotes including worms; some people are more worm-like than others, though) only have the genes we need to survive, nearly all the genes in C. elegans were found to impact fitness when knocked down.
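To see why racing strains against each other is so much more sensitive than one-generation survival checks, here’s a minimal Python sketch. The 2% fitness cost and the generation counts are arbitrary numbers for illustration, not values from the paper:

```python
# A knock-down strain with a small fitness deficit competing against
# wild type. The 2% cost is an arbitrary illustrative number.
def knockdown_fraction(start_fraction=0.5, fitness=0.98, generations=50):
    """Track the fraction of knock-down worms over rounds of competition."""
    f = start_fraction
    history = [f]
    for _ in range(generations):
        # Each strain contributes offspring in proportion to its fitness.
        f = f * fitness / (f * fitness + (1 - f) * 1.0)
        history.append(f)
    return history

trajectory = knockdown_fraction()
for generation in (0, 10, 25, 50):
    print(f"generation {generation:2d}: {trajectory[generation]:.1%} knock-down worms")
```

A 2% handicap is invisible in a single generation, but after fifty rounds of competition the knock-down worms have dwindled from half the population to roughly a quarter of it—exactly the kind of deficit this assay is designed to catch.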

That result is at odds with all those experiments in which single genes could be deleted without any obvious effect on the organism. The explanation is probably that different genes play a role under different conditions. A gene might be dispensable in the cosseted environment of the laboratory but, were the mutant let out into the big wide world, with all its various stresses and challenges, its survival would be seriously impaired.

Interestingly, many more genes are found to be essential when this method is used in C. elegans than are identified by similar experiments in yeast. The authors of this paper suggest that this is down to selective pressures being very different for single-celled and multi-cellular organisms. Whereas something like yeast only has to deal with one environmental condition at a time, a multi-cellular organism is forced to juggle the needs of lots of different cell types which are all under different pressures of their own. A multi-cellular creature is far more complex than a unicellular organism and the genes required are therefore more finely tuned. A little like playing Jenga on not just a tower but an entire city and…OK, the analogy is collapsing all around me so I am going to give up and have a drink instead.

Sunday 12 February 2012


Like animals, plants can be infected by a range of pathogenic organisms. And, like animals, plants possess an immune system to fight off attacks from pathogens. The plant immune system is analogous to the innate immune system in higher eukaryotes but does not involve mobile immune cells such as macrophages. Instead, it is every cell for itself when dealing with a potential infection.

The plant innate immune system recognises molecules that are common to whole groups of infecting microorganisms, known as microbe-associated molecular patterns (or MAMPs). When surface receptors bind these MAMPs, the cell responds in a non-specific manner—for example, by inducing production of antimicrobial agents that can protect other parts of the plant, or by initiating cell death in order to prevent spread of an infection.

Successful pathogens, however, have ways to get around the initial immune response. By injecting effector molecules into the plant cell, they are able to interfere with the cell’s ability to mount an effective response. This has led to the evolution of a second branch of the innate immune system in plants which recognises a pathogen’s effector molecules once they get inside the cell, or responds to the downstream effects of these effectors on the plant cell.
The two branches of the plant innate immune response to a pathogen.

The innate immune system is on the front-line in a plant’s battle against infection, so it needs to be extremely good at recognising invading pathogens. Despite the importance of the innate immune system’s ability to recognise threats, little is known about the range and diversity of the MAMPs capable of triggering the immune response. 

So what makes a good MAMP? Because of the non-specific nature of the innate immune response, it is impossible for the receptors on a plant cell to recognise every protein found in every pathogen. Therefore, the immune system focuses only on those proteins which are commonly found in a range of infectious organisms—these tend to be important proteins with a vital function across many species. But a pathogen has its own ways to avoid being recognised and subsequently killed. One method is to vary those proteins that are recognised by the host’s immune system so that they are no longer detected. For this reason, proteins are under strong positive selective pressure to diversify—natural selection will lead to the evolution of pathogens possessing mutated proteins that are no longer recognised by the host’s immune system. However, the more important a protein, the less likely that a random mutation will be tolerated. So vital proteins are also under strong negative selective pressure to maintain their function.

This paradoxical situation results in different regions of an immune system-recognised protein being under either positive or negative selective pressure, depending on whether a mutation in that region disrupts host recognition or destroys protein function. By identifying proteins with this particular pattern of positive and negative selection, a group at the University of Toronto searched the genomes of a number of plant pathogens for potential elicitors of innate immunity, and their work was recently published in PNAS.
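In sequence terms, that tug-of-war is usually summarised as omega, the ratio of nonsynonymous to synonymous substitution rates (dN/dS): omega above 1 hints at positive (diversifying) selection, omega below 1 at negative (purifying) selection. Here’s a Python sketch of how candidate regions might be flagged—the per-window rates are invented, and the real screen was considerably more sophisticated:

```python
# Classify regions of a pathogen gene by their dN/dS ratio (omega).
# The per-window substitution rates below are invented for illustration;
# real analyses estimate them from aligned pathogen genomes.
windows = [
    {"region": "codons 1-50",    "dN": 0.002, "dS": 0.020},
    {"region": "codons 51-100",  "dN": 0.045, "dS": 0.015},
    {"region": "codons 101-150", "dN": 0.003, "dS": 0.018},
]

for w in windows:
    omega = w["dN"] / w["dS"]
    if omega > 1:
        verdict = "positive selection: candidate immune-recognised region"
    else:
        verdict = "negative selection: function being preserved"
    print(f"{w['region']}: omega = {omega:.2f} -> {verdict}")
```

A protein showing both signatures at once—a rapidly diversifying surface patch bolted onto a rigidly conserved core—is exactly the sort of candidate this kind of screen is designed to pull out.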


After screening the genomes for potential immune response elicitors, the researchers synthesised the corresponding peptides and inoculated them into A. thaliana (a species of cress commonly studied by plant biologists). These plants were then challenged with a pathogen to determine whether the peptides could suppress virulence, indicating that they had triggered the innate immune response.

In total, the researchers found 55 new peptides capable of switching on the innate immune response. It is hoped that this work will give an insight into how co-evolution of plants and their pathogens has occurred. In addition, understanding how the plant innate immune system works could make it possible to synthesise new antimicrobial agents capable of transiently protecting plants from pathogens, or even to genetically engineer improved plants with better disease resistance.

Wednesday 1 February 2012

To understand why infectious diseases make us ill, it helps to consider disease from the pathogen’s point of view. Bacteria, viruses and parasites did not evolve simply to cause illness and suffering; virulence is merely a by-product of a pathogen’s fight for survival. Because an infectious agent which incapacitates its host before it has had the chance to be transmitted is an evolutionary dead-end, the key to survival is striking the right balance between transmissibility and virulence. It's a numbers game—a pathogen needs to divide in sufficient numbers to overcome the efforts of the host’s immune system long enough to ensure that it will be transmitted to a new host. But exploit the host too much, and there is the risk that the pathogen will be left homeless.
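That balance can be made concrete with a standard textbook-style trade-off model (my own sketch in Python, not a model from any paper discussed here). Assume transmission rises with virulence but saturates, while more virulent infections end sooner; the pathogen’s expected number of new infections, R0, then peaks at an intermediate virulence:

```python
# Toy transmission-virulence trade-off.
# All parameter values are invented for illustration.
def r0(v, beta_max=5.0, half_sat=0.5, recovery=0.2, background_death=0.02):
    """Expected number of new infections for a pathogen with virulence v."""
    beta = beta_max * v / (half_sat + v)                 # transmission saturates with v
    duration = 1.0 / (recovery + background_death + v)   # virulence cuts the infection short
    return beta * duration

virulences = [v / 100 for v in range(1, 301)]
best_v = max(virulences, key=r0)
print(f"R0 peaks at intermediate virulence v ~ {best_v:.2f}")
for v in (0.05, best_v, 2.50):
    print(f"v = {v:.2f}: R0 = {r0(v):.2f}")
```

Too tame and the pathogen barely transmits; too vicious and it destroys its own home before moving on. The sweet spot sits somewhere in between, which is the intuition behind everything that follows.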

Pathogens have evolved various solutions to this paradoxical situation. Mycobacterium tuberculosis, the bacterium responsible for TB, has been infecting humans for thousands of years and it has evolved to be extremely good at it. Because this disease first emerged when we lived in isolated communities, M. tuberculosis became adept at asymptomatically infecting as many people as possible for extremely long periods of time, causing active disease in only a proportion of those infected. In this way, M. tuberculosis ensured its prehistoric hosts would survive long enough to encounter other humans to whom they could spread the disease.

The waiting game works for pathogens like M. tuberculosis, where close contact between hosts is required for transmission. But a disease such as malaria, which is spread by an intermediate vector, can afford to make the host much sicker and still guarantee that the infection is passed on to others. Diarrheal diseases such as cholera can be similarly virulent. In this case, the infection is spread via contaminated water, meaning that the bacterium responsible can be transmitted even when it replicates in the host at such high levels that they rapidly succumb to the infection and die.

Thinking about how pathogens evolve to ensure their own survival led me to this recent paper published in Scientific Reports. This work is interesting in that the authors consider the role of host evolution, as well as that of the pathogen, in determining disease outcome. In the case of highly virulent infectious agents, might the rapid death of the host actually be beneficial to the population as a whole?

The idea behind this hypothesis is that, if a member of the host population dies immediately upon infection, it can protect the rest of the population from secondary infections. The team from the University of Tokyo investigated the infection of Escherichia coli with bacteriophage lambda. They used a mixed population of E. coli containing an altruistic host, which commits a bacterial version of suicide as soon as it is infected, and a susceptible host, which permits multiplication and transmission of the phage. When these strains were infected in a structured habitat, in which contact between hosts was limited to close neighbours, the presence of the altruistic hosts protected the overall population from being overcome by the infection.
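Here’s a rough Python sketch of that structured-habitat idea (the parameters are invented, and the actual study used real bacteria and phage rather than a simulation): hosts sit on a grid, infection hops only between neighbours, and altruists die on infection without ever transmitting.

```python
import random

def simulate(altruist_fraction, size=40, spread_prob=0.8, seed=1):
    """Fraction of hosts surviving one epidemic on a grid of hosts."""
    random.seed(seed)
    # Each cell is 'A' (altruist) or 'S' (susceptible); 'D' marks the dead.
    grid = [["A" if random.random() < altruist_fraction else "S"
             for _ in range(size)] for _ in range(size)]
    infected = [(size // 2, size // 2)]       # a single initial infection
    while infected:
        x, y = infected.pop()
        if grid[x][y] == "D":
            continue                           # already dead
        transmits = grid[x][y] == "S"          # altruists die without transmitting
        grid[x][y] = "D"
        if transmits:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < size and 0 <= ny < size and grid[nx][ny] != "D":
                    if random.random() < spread_prob:
                        infected.append((nx, ny))
    survivors = sum(row.count("A") + row.count("S") for row in grid)
    return survivors / (size * size)

for fraction in (0.0, 0.2, 0.5):
    print(f"{fraction:.0%} altruists -> {simulate(fraction):.0%} of hosts survive")
```

With no altruists, the infection sweeps the whole grid; seed in enough of them and the chains of transmission are broken before it can get far—which is presumably why the structured habitat mattered so much in the experiment.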

The researchers also observed the emergence of phage mutants that could bypass the altruistic host suicide mechanism. By not killing the host, these random mutants ensured that they could be passed on to other bacterial cells and guaranteed their own survival.



The presence of altruistic hosts, which commit suicide upon infection, protects the entire population, including susceptible hosts.


This work demonstrates that virulence has not evolved as a result of the pathogen alone, but is influenced by the interaction between the host and the pathogen. In a way, this represents an ‘arms race between pathogen infectivity and host resistance.’ The pathogen will favour lower virulence in order to maintain a sustainable symbiosis, while the host population as a whole benefits from high virulence even though individuals die as a result.

Suicidal defence has previously been described in multi-cellular organisms, where infected single cells are rapidly destroyed to prevent spread of the infectious agent throughout the entire organism. Taking these findings and attempting to extrapolate them to draw conclusions about how human evolution has shaped pathogen virulence is perhaps going too far. The huge difference in the growth rates of a human and a bacterium means that the majority of the evolutionary contribution to this particular arms race is from the bacterium’s side. However, this kind of study does show that, where the survival of two organisms is so intertwined, we cannot consider one without taking the involvement of the other into account.

Evolution may move too slowly for humans to compete with pathogens in this way, but the environmental changes that we make have a huge impact on the ability of bacteria and viruses to infect us. This has already been observed in the case of cholera. As improvements in sanitation become more widespread, highly virulent strains are disappearing. This is because those strains which incapacitate the host very rapidly can no longer be as easily passed on, meaning that less virulent strains that do not kill so quickly have the advantage.

It is also interesting to think about how our modern way of life can contribute towards creating epidemics. For example, the bird flu threat would not be quite so concerning if it wasn’t for air travel providing the potential for any emerging epidemic to spread around the entire world. A highly virulent pathogen is likely to be fairly short-lived unless it has a way to spread very rapidly to a large number of hosts. Take Ebola, for example—one of the world’s most deadly diseases, yet outbreaks can be confined to relatively small areas and burn out quickly. In this regard, Ebola is actually a fairly unsuccessful pathogen. M. tuberculosis, on the other hand, remains a global issue predominantly due to its ability to infect a huge proportion of the population without causing rapid death of the host. In this case, patience can pay off for a pathogen.