Sunday, 1 July 2012

Finishing Things

So last week I ‘finished’ my latest book. It’s a young adult novel about a girl who can find lost things and the search for a mythical forgotten city with a beating stone heart said to hold the key to immortality. Also, man-eating gargoyles.

I’m not very good at finishing things, so getting to the point when I can bear to stop messing around with a book is a cause for celebration. Oscar Wilde once said: “I was working on the proof of one of my poems all the morning, and took out a comma. In the afternoon I put it back again.”

That’s me—staring at the same page for hours, trying to think up a more evocative way of saying ‘she walked to the door’. And then the dawning realisation that this is the most boring sentence known to mankind (no matter whether she walks, stumbles, or minces) and it needs to die.

At the other end of the spectrum, I see loads of writers who type ‘the end’ and think they’re done. But on top of the four months I spend writing a first draft, I need another three to get my manuscript into a state suitable for inflicting upon others. Sometimes longer (and sometimes an eternity would not be enough. Zombie poodles? What was I thinking?).

One of the most stressful periods of my life was writing up my PhD thesis ridiculously quickly so that I could start a new job (and get paid!). I still cringe when I think of how much better (and shorter) it could have been if I'd been able to go through it a few more times and make some changes. But when my examiners failed to mention what I thought was a declaration of war against the English language, I realised that sometimes my endless faffing actually achieves very little. I suppose there's a balance between getting something right and polishing it to such a high shine that you scrub through the skin and its bones start to poke through.

So here is how I know when to stop:

Do I have enough distance from the project to see all its faults?
When you're looking at the same thing day in, day out, it is hard to be objective. But take a break and let the project simmer in its own juices and suddenly the flaws become all too apparent. Hitting send on that submission the moment you write the last word is not a good idea.

Am I too attached to my precious words to do what’s necessary?
If a character or scene or sentence isn’t adding anything to the plot it needs to go. Yeah, I might think the idea is awesome but others won't be so impressed with my self-indulgence.

Have I read it and re-read it and removed the majority of the errors?
My Achilles heel is typing 'that' instead of 'than'. And however many times I read something, I will always manage to find one that I've missed. The odd mistake is one thing. But it annoys me no end when I hear fiction writers say something along the lines of 'I'm rubbish at grammar and can't be bothered to learn, but that's an editor's job anyway'. Great way to make a professional first impression.

Am I happy with it?
Chances are, if an agent or publisher takes it on, they are going to request tonnes of revisions, but that's not a good excuse for not making something the best you can. Competition to be published is huge and submitting something with obvious flaws or plot holes is a terrible idea (and I say this from past experience. Oh the shame).

Once I am satisfied with my answers to these questions, it is time to let go and move on to something new! Hopefully that something new will include finding time to post science-related articles on this blog...

Sunday, 27 May 2012


It makes me laugh when I hear people say that they don't like fantasy or science fiction novels because 'it's not real'. All fiction, by definition, is made up. Yet, when it comes to imaginary monsters or aliens or magicians with pointy hats and white beards, many people don't want to read something so removed from reality. The reason I have a problem with this isn't that some people don't want to read the sort of book that I happen to write. These differences in taste are what make the world interesting. But their reasoning does bother me. And this is because all books are about people. In Animal Farm, Orwell made his people into pigs to show the dangers of totalitarianism; Philip K. Dick used androids to make us think about what makes us human in Do Androids Dream of Electric Sheep?; Tolkien had hobbits and elves and wizards, but The Lord of the Rings was about the power of temptation, and humanity's relationship with death.

There are very few new ideas in the world, but there are a million ways of saying them. The eternal question of what makes us human doesn't change if you happen to dress it up with the odd dragon. Love still conquers all if it is up against fairies and talking trees. I personally read books in the hope that they will teach me something new, or make me think about something in a different way. And all it takes is someone to package those ideas up in a way that resonates with me. The plot might be a post-apocalyptic fight for survival, but the message is about the futility of war, or the strength of the human spirit, or maybe even the meaning of life.

One of the hardest things to learn as a writer is your 'voice'. It can be a richness of prose like Dickens, or the inclusion of certain recognisable elements like Roald Dahl, or quirks of language like Shakespeare. Or it can be a unique approach to the rules of grammar reminiscent of Cormac McCarthy. While few think about it, your real-life voice is just as unique and just as capable of boring or intriguing or exciting those who listen. There are people who we enjoy talking to, and there are those who change us—they find a way to say something that gets under our skin and makes us rethink our opinions. The same goes for books. On his literary inspirations, Martin Amis said: "I find another thing about getting older is that your library gets not bigger but smaller, that you return to the key writers who seem to speak to you with a special intimacy. Others you admire or are bored by, but these writers seem to awaken something in you."

There's a form of magic to finding a way to say something in a way that sneaks into the head and heart of a reader, and plants the seed that will grow into a new way of thinking about something. But that's the important part—to only sow the seed instead of trying to ram a fully grown tree down someone's throat. It's a sneaky kind of persuasion—tricking someone into coming up with the very idea that you wanted them to have without them even noticing you, the writer, quietly whispering in their ear. However, try too hard and a writer's voice becomes a boastful five-year-old screaming 'look at me, look at me.' And it's something many novice writers struggle with: the fine line between cultivating a unique voice and ensuring that this voice is unobtrusive enough that the reader doesn't feel clubbed into submission.

There’s a saying among writers: ‘Murder your darlings.’ British writer Arthur Quiller-Couch is quoted as saying, "Whenever you feel an impulse to perpetrate a piece of exceptionally fine writing, obey it—wholeheartedly—and delete it before sending your manuscript to press." It all comes back to the need to control that writer’s voice to a point where it flows over a reader instead of drowning them in a sea of flowery language and esoteric wit. But it’s a paradox for beginning writers who, at the same time as needing to develop their unique voice, also need to know when to rein it in.

My stomach always lurches when I hear writers utter the words, ‘my book is my baby’. It’s a commonly known fact that people tend to lose their objectivity when it comes to their own kids. Revising a novel from first draft terribleness to something others might actually want to read involves listening to criticism, and then ripping all your precious words apart and sticking them back together again. Few people are willing to dismember their ‘babies’ to create Frankenstein’s monster. But the words on the page are not the story, they are merely a vessel to sneak your ideas into someone else’s mind. Words are the tools, but the art is the ideas they conjure in the reader’s head.

The same goes for careers beyond writing, science included. Presenting a graph of your data isn't enough if you can't find a way to package it as something others want to read. I'm not talking about clever turns of phrase or poetic descriptions, which have no place in scientific writing. The words don't need to be beautiful to do their job, but it's a mistake to think they don't matter. They need to gently prod the reader in the right direction by highlighting the important parts, allowing fellow scientists to reach the same conclusions that the author did, only in the space of thirty minutes rather than ten years. You can't shout your side of the argument and expect others to give in—a fault I do sometimes see in scientists' attempts to deal with certain controversial subjects such as cloning. You have to say it in a way that makes someone listen, and then makes them think.

That’s the secret behind any good writing—whether it’s designed to entertain or educate, whether it’s about bacteria or dragons. In the end, everything is about people and how we fit into the world around us.

Saturday, 14 April 2012

African sleeping sickness is one of those scary diseases that seems kind of alien to anyone living in the Western world but which is a real threat to those living in sub-Saharan Africa, causing around 50,000 cases each year. The disease gets its name from the most recognisable symptom—a disruption of sleeping patterns after the parasite infects the brain. A recent paper published in PLoS One shed some light on how the parasite makes the treacherous journey from the blood to the brain. But why does a parasite spread by infected blood want to get into our head to start with?
The three forms of trypanosomes - slender, intermediate and stumpy.
Any infectious agent needs to have a plan of attack for dealing with the host's immune system. Some microorganisms go along the route of actively switching off the immune response. Others hide from the immune cells that would otherwise kill them. The trypanosomes responsible for sleeping sickness use a less subtle but highly effective method to stay one step ahead of the immune system while circulating in the blood. The parasites are coated with ten million copies of the same protein, which is recognised by the host, allowing the immune system to start clearing the infection. But, just as the host starts to get the upper hand, the parasites subtly change this protein disguise so that they are no longer recognised by the immune system.
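For the programmatically inclined, here's a toy simulation of that arms race. It's my own illustrative sketch, nothing from the paper: the variant names, switch rate and growth rate are all invented, and the 'immune system' simply learns to clear whichever coat variant is currently dominant. The population still persists, because a few parasites have always switched to a new coat by the time the host catches up.

```python
import random

# Toy model of antigenic variation: the immune system learns to kill the
# dominant coat variant, but a small fraction of parasites switch coats
# each generation and escape clearance. All numbers are invented.
GENERATIONS = 20
SWITCH_RATE = 0.05   # fraction of parasites changing coat per generation
GROWTH = 1.5         # per-generation multiplication factor

population = {"VSG-1": 10_000}   # coat variant -> parasite count
recognised = set()               # variants the immune system has learned

for gen in range(GENERATIONS):
    new_pop = {}
    for variant, count in population.items():
        count = int(count * GROWTH)
        switchers = int(count * SWITCH_RATE)
        # Parasites wearing a recognised coat are cleared; the rest survive.
        survivors = 0 if variant in recognised else count - switchers
        if survivors > 0:
            new_pop[variant] = new_pop.get(variant, 0) + survivors
        if switchers > 0:
            new_variant = f"VSG-{random.randint(2, 2000)}"
            new_pop[new_variant] = new_pop.get(new_variant, 0) + switchers
    if new_pop:
        # The host mounts a response against whichever variant now dominates.
        recognised.add(max(new_pop, key=new_pop.get))
    population = new_pop
    print(f"gen {gen:2d}: total={sum(population.values()):8d}, "
          f"variants seen off={len(recognised)}")
```

Run it and the totals dip every time a dominant coat is recognised, then bounce back as the escapees multiply: the host wins every battle and still loses the war.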


Without drugs, it is impossible for an infected person to deal with the infection and the disease is always fatal. But sleeping sickness has a fairly high rate of relapse even after treatment. One of the reasons for this could be that, at some point, the parasite decides to make the trip from the blood into the brain. Here, it is effectively protected from drug treatment, and can pass back into the bloodstream to continue the infection. An evolutionary explanation for this could be that some hosts are better at dealing with the infection than humans, and the brain represents a hiding place from the immune system.

Tsetse fly. Yuk.
Image from Wikipedia
This late stage of the disease—the brain stage—is not well understood. It takes weeks or months for the late symptoms including confusion, reduced coordination, daytime sleepiness, and insomnia at night to emerge, and the reasons for this remain elusive. One of the most striking of these symptoms—the change in sleeping patterns—has an interesting explanation. Sleeping sickness is spread by the tsetse fly. The tsetse fly is one of the less pleasant creatures in the world and it has fairly disgusting table manners. It bites a hole in the skin, vomits up some of its last meal complete with any parasites along with agents to prevent the blood from clotting, and then feasts on the resulting blood pool. This isn't particularly pleasant for the unfortunate owner of the blood. Therefore it helps if the meal happens to be asleep at the time of being fed on.

But how does the trypanosome succeed in altering a person's sleeping patterns? It appears that this is a side effect of a signalling molecule used by trypanosomes to control cell density. When the parasite gets into the brain, it doesn't want to cause extensive inflammation and get itself noticed. So it secretes a messaging molecule called PGD2 that tells neighbouring parasites to commit parasite-suicide for the good of the overall population. But PGD2 has also been shown to cause non-REM sleep when injected into the nervous system. So secreting PGD2 directly into the brain works in the parasite's favour: a person who keeps falling asleep during the day is far more likely to be bitten by the day-feeding tsetse fly.
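Since PGD2 is effectively acting as a density thermostat, here's a cartoon of how that kind of self-limitation plays out. Again, the numbers are my own inventions, including the assumption that signal level is simply proportional to parasite density:

```python
# A cartoon of density-dependent self-limitation: parasites secrete a signal
# in proportion to their numbers, and once the signal passes a threshold an
# increasing fraction 'commit suicide', capping the population before it
# provokes heavy inflammation. All numbers here are invented.
DENSITY_THRESHOLD = 10_000   # signal level at which the die-off kicks in
GROWTH = 1.3                 # daily multiplication factor
MAX_DEATH_FRACTION = 0.5     # ceiling on the suicidal fraction per day

parasites = 100.0
for day in range(25):
    parasites *= GROWTH
    signal = parasites  # assume signal is proportional to parasite density
    if signal > DENSITY_THRESHOLD:
        excess = (signal - DENSITY_THRESHOLD) / signal
        parasites *= 1.0 - MAX_DEATH_FRACTION * excess
    print(f"day {day:2d}: {parasites:8.0f} parasites")
```

With these made-up parameters the population grows unchecked at first, then settles onto a plateau just above the threshold: a stable, quiet infection rather than a runaway one.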


The sleeping sickness parasite makes its way to reside between the pia mater and glia limitans at the edge of the brain.
Image from: Wikipedia
So how does the parasite get into the brain in the first place? Our brains are cut off from our blood supply by the blood-brain barrier—a barrier which actively prevents such things as parasites from making the trip out of our veins and into our central nervous system. In addition to the blood-brain barrier, we also have a barrier between our blood and the cerebrospinal fluid—the colourless liquid in which our brains float—and it is across this barrier that the parasites make the journey into the brain. Hartwig Wolburg and coworkers demonstrated that this journey takes the parasite through hostile territory until it reaches a position at the edge of the brain where it is protected from the immune system but can still reinvade the blood if it so chooses.

But the group responsible for this work also addressed the question of why the brain stage takes so long to emerge. Something interesting about their attempts to reproduce the brain infection in rats was that it proved impossible to simply inject parasites into the nervous system. Instead, the infection needed to take its usual course, beginning with the blood stage and progressing to the brain stage after some time. It appears that there are three forms of the parasite (shown in the figure at the top of the post)—a stumpy form which does not undergo the variation in its coat proteins and is killed by the immune system, an intermediate form which is responsible for the blood infection, and a slender form which can cross into the brain. How this slender form emerges and whether it really is required for brain infection remains to be determined, however.

Research such as this has the potential to help the development of future vaccines and drugs by teaching us more about how the infection progresses. The current treatment for the later brain stage of the disease involves an arsenic derivative which kills one in twenty people and has been described as 'fire in the veins' by those unlucky enough to need to take it. Over the past few years, sleeping sickness case numbers have slowly been decreasing and it is hoped that within a decade this disease may finally be eliminated.

Tuesday, 3 April 2012


At this time of year, the Kruger National Park in South Africa reaches temperatures of up to 38 degrees Celsius. This has nothing to do with the subject of this post, but I thought I would use it to illustrate one of my newly recognised great discoveries of the 20th century—in-car air-conditioning. It’s a pretty tenuous link to what I really want to talk about, but the invention of modern air-conditioning occurred in 1902 in Buffalo, New York (thanks to a guy called Willis Haviland Carrier) and it so happens that, while in the Kruger Park, our awesomely air-conditioned 4x4 was charged by a slightly over-exuberant buffalo. Close-up, buffalo are kind of scary.

So a safari in the Kruger tends to involve a lot of driving around, peering through binoculars at what could be an animal but, more often than not, proves to be a large rock. Along the way, it is possible to stop off at a number of rest stops for a deeply unpleasant burger and to look at maps of the park on which other visitors have stuck magnetic stickers indicating the positions in which various animals have been spotted. But it turns out that all the stickers for rhinos have been removed and replaced with a little sign saying that, for conservation reasons, the sightings of rhinos are no longer reported. This kind of sucks.

The Kruger National Park is home to around 10,000 white rhinos (see photo above taken from the comfort of our air-conditioned 4x4) as well as about 500 black rhinos—a critically endangered species of which there are estimated to be around 3,500 left in the wild. And the main reason rhinos are so endangered? Because people keep poaching them for their horns, which are a key ingredient in Chinese herbal medicine. Just last year, something like 250 rhinos were killed in the Kruger National Park. And, at the same time that we were in South Africa, there was a candlelit demonstration outside the Chinese embassy by a group asking the Chinese government to condemn the use of rhino horn in traditional medicine in an effort to stop this barbaric slaughter of animals.

Now I'm all for preserving cultural traditions, but seriously? Since when is there any scientific evidence that powdered rhino horn is any use in treating fevers and convulsions, the conditions for which it is prescribed? And do you know how your ailments are often diagnosed by a Chinese Medicine practitioner? By looking at your tongue. Yes, my tongue is one of my favourite bodily appendages, as evidenced by the multitude of delicious foods I partook of while visiting South Africa (buffalo included). But do I believe that it is some kind of medical window into my health status? Um, no. Based on self-inflicted internet-based Chinese tongue diagnosis, I currently have a Yin deficiency (which can be treated with rhino horn) and something known as 'Damp heat'. I don't know what that is, but it sounds troubling.

Something that infuriates me is the on-going trendiness in the Western world for embracing herbal remedies. Here's the thing—if such remedies were scientifically proven to work, they would be what we scientists like to call 'prescription drugs'. Otherwise, they are concoctions of herbs and who-knows-what-else mixed up based on unsupported assumptions about how the body and disease work. I couldn't find any clinical trial which conclusively demonstrated that Chinese herbal therapy has any positive effect. Something there is proof of, though? Arsenic, lead and mercury poisoning as a result of these herbal remedies. Just because it is 'natural' doesn't mean it isn't going to kill you.

That's not to say that some Chinese herbal medicines don't work. The thing is, how are we meant to know while they remain unregulated and untested? All I know is I am not putting anything in my mouth that has been prescribed as a result of some doctor poking my tongue and that requires a rare animal such as the black rhino to be poached to the brink of extinction to provide supposed 'medicines'.

Sunday, 26 February 2012


Yersinia pestis holds the dubious title of the world's most devastating bacterial pathogen. While its glory days of the Black Death are thankfully a thing of the past, this pathogen remains a threat to human health to this day. A recent paper published in PNAS describes how the bacterium switches off the immune system in the lungs, going some way to explain why the pneumonic form of the Black Death is almost always fatal if untreated.

During the Middle Ages, the plague, or Black Death—so called because of the blackening of its victims' skin and blood—killed approximately a hundred million people across the world. In Europe, in particular, between thirty and sixty percent of the population is believed to have perished. Although we now know that the bacterium responsible was transmitted by rat fleas, Europe in the Middle Ages was not known for having a sound grasp of science. Theories to explain the cause of the Black Death included a punishment from God, alignment of the planets, deliberate poisoning by other religions, or 'bad air'. This final theory persisted for some time, leading seventeenth-century doctors to don a bird-like mask filled with strong-smelling substances, such as herbs, vinegar or dried flowers, to keep away bad smells and, therefore, the plague.

While today we can cure the plague with antibiotics, historical treatments were as unreliable as the Middle Ages' understanding of the disease. The characteristic swellings of a victim's lymph nodes were often treated by blood-letting and the application of butter, onion and garlic poultices. But such remedies did little to improve a victim's chances (even if they did make them smell delicious)—mortality rates varied between sixty and one hundred percent depending on the form of the disease afflicting the patient. This led to the desperate population attempting far more extreme measures, such as medicines based on nothing but superstition, including dried toad, or self-flagellation to calm their clearly angry gods.

The three predominant forms of the disease were described by a Flemish musician named Louis Heyligen (who died of the plague in 1348):

"In the first people suffer an infection of the lungs, which leads to breathing difficulties. Whoever has this corruption or contamination to any extent cannot escape but will die within two days. Another form...in which boils erupt under the armpits,...a third form in which people of both sexes are attacked in the groin."

So anything involving the words "attacked in the groin" is clearly a bad thing. But these three forms of the plague come in different flavours of "bad". Of the three, bubonic plague with its unpleasant boils and swellings is the least fatal, killing around two-thirds of those infected. Whereas bubonic plague spreads throughout an infected person's lymphatic system, septicaemic plague is an infection of the bloodstream and is almost always fatal. The final form, the rarer pneumonic plague, also has a near one hundred percent mortality rate and involves infection of the lungs, often occurring secondary to bubonic plague and capable of being spread from person to person.

One of the most interesting aspects of pneumonic plague is that the first 36 hours of infection involve rapid multiplication of the bacteria in the lungs but no immune response from the host. It is as if the immune system simply doesn't notice the infection until it is too late to do anything about it. This ability to replicate completely beneath the immune system's radar makes Y. pestis unique among bacterial pathogens, and a group from the University of North Carolina recently attempted to shed some more light on how Y. pestis achieves this feat, publishing their findings in PNAS.

So is Y. pestis's success down to a) an ability to hide from the immune system, or b) a deliberate suppression of the normal host response to a bacterial infection? To answer this question, the scientists co-infected mice with two strains of Y. pestis—one capable of causing plague in mice and one which is usually recognised and cleared by the immune system. If the bacteria are capable of modifying the conditions in the lung for their own benefit, it should be possible for a non-pathogenic mutant of Y. pestis to survive when co-infected with a virulent strain.




And this is exactly what the scientists found. In the above image, the green bacteria would normally be cleared by the immune system but, in the presence of the pathogenic red strain, they are able to survive. This suggests that the pathogenic Y. pestis is actively switching off the immune system, establishing a unique protective environment that allows even non-pathogenic organisms to prosper. The authors went on to show that this effect isn't limited to strains of plague—other species of bacteria not usually able to colonise the lung can also replicate unperturbed when present as a co-infection with Y. pestis.

Part of this immunosuppressive role is carried out by effectors injected into the host cell by a type III secretion system—a kind of bacterial hypodermic needle. But this isn't the only mechanism involved and, unfortunately, determining exactly how Y. pestis establishes the permissive environment is proving difficult. The authors of the PNAS paper turned to a commonly used method for investigating which Y. pestis genes are vital for an infection to progress. TraSH screening is a really clever technique which involves infecting an animal model with large pools of gene mutants and determining which mutants are lost over the time-course of the infection. In other bacterial species, it is every bacterium for itself: mutants with a defect in virulence fail to survive in the animal model, giving an insight into which genes are vital for infection. But this does not work well for Y. pestis because virulent mutants permit the growth of impaired mutants that, alone, would be unable to cause disease.


Screening for genes involved in infection - an animal model is infected with a pool of single mutants. Those mutants lost during infection are identified and the mutated gene used to learn more about what is required for an infection. This method does not work well with Y. pestis as attenuated mutants can survive in the permissive lung environment created by the other mutants, despite not being able to create this environment on their own.
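To spell out why the screen stops being informative, here's a toy comparison (the mutant names and coin-flip virulence assignments are invented): mutants surviving on their own merits versus mutants sheltering in a shared permissive environment.

```python
import random

# A toy illustration of why a pooled mutant screen (TraSH-style) breaks down
# for Y. pestis. Mutant names and virulence assignments are invented.
random.seed(1)
pool = {f"mutant_{i}": random.choice([True, False]) for i in range(10)}
# True = fully virulent, False = attenuated

def screen(pool, shared_environment):
    """Return the mutants still present at the end of the infection."""
    any_virulent = any(pool.values())
    survivors = []
    for name, virulent in pool.items():
        # Normally a mutant must be virulent to survive; in a shared
        # permissive environment, one virulent neighbour is enough.
        if virulent or (shared_environment and any_virulent):
            survivors.append(name)
    return survivors

print("independent survival:", screen(pool, shared_environment=False))
print("shared environment: ", screen(pool, shared_environment=True))
# The first list is missing the attenuated mutants (an informative read-out);
# the second contains every mutant (the read-out is lost).
```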

Part of the modern-day interest in pneumonic plague is, unfortunately, the result of a human rather than a natural threat—bioterrorism. The Black Death bacterium has an unpleasant history of use as a weapon. As far back as 1346, the Tartars catapulted plague-ridden corpses over the city walls of their enemies and, unfortunately, as technology and science advanced, so did our ability to use deadly diseases against our enemies. During World War II, the Japanese dropped bombs containing plague-infected fleas on Chinese cities, and the Cold War saw both America and the USSR develop aerosolised Y. pestis. One of today's concerns is that we don't know what happened to all the weapons research carried out in the USSR, meaning that weaponised, antibiotic-resistant Y. pestis must be considered a potential bioterror threat. So understanding how the plague bacterium causes disease in humans is vital for the future development of new treatments and vaccines. And it is also a really interesting pathogen due to its unique way of ensuring it survives long enough in the host to be transmitted to other unfortunate victims.

Tuesday, 21 February 2012

There's a lot in the news at the moment about a little boy who has been diagnosed with Gender Identity Disorder and is now living as a girl. I can't quite decide how I feel about this. Part of me thinks it is awesome that his parents and teachers are being so supportive—god knows we could do with a bit more understanding when it comes to adults who identify with the opposite gender to the one their chromosomes dictate. But there's another part of me that is: a) hugely disturbed by the parents' motives in plastering this five-year-old all over the newspapers and internet, and b) worried that too much emphasis is put on a person being either 'male' or 'female', especially at such a young age.

Despite what certain media reports might tell you, there is no such thing as a 'male brain' or a 'female brain'. The truth is, no one really knows how our minds decide to associate with one gender or the other—is it physical, or chemical, or psychological, or a mixture of all three? Our entire personality certainly isn't a product of our genes, so why are we so fixated on this idea that we are born a certain, fixed way when it comes to gender identity? Most people would be furious to be told that their upbringing and experiences have had no effect on their personalities—of course we don't arrive on Earth with all our views and personality quirks preformed. Yet, when it comes to complicated and controversial topics such as gender identity, many seem determined to relinquish all control over something so integral to who we are as a person. Of course there might be a biological or chemical cause (or causes) for Gender Identity Disorder—but can we honestly say cultural gender definitions play no role?

I think my big problem comes down to society's definitions of what makes a girl and what makes a boy, as if the two are set in stone. You don't like playing with dolls? Yeah, you're male. You like talking to people and are great at empathy? Ohhh, such a girl. It's ridiculous. Especially when there is no evidence that traits such as these are intrinsically 'male' or 'female'. Whenever there is a perfectly reasonable scientific study into the physical characteristics of the brains of men and women (some brain disorders have much higher rates in a particular sex, meaning we can't ignore these differences), certain non-scientists insist on using the data to make sweeping generalisations about the sexes that reinforce stereotypes and are simply not backed up by the science. In reality, many of these supposedly scientifically supported gender differences are completely mythical.

Let's start with the old favourite 'brains develop differently in girls and boys'. A school in Florida is not unique in its support of single-sex schooling, and backed up its policy with:

‘‘In girls, the language areas of the brain develop before the areas used for spatial relations and for geometry. In boys, it’s the other way around.’’ and ‘‘In girls, emotion is processed in the same area of the brain that processes language. So, it’s easy for most girls to talk about their emotions. In boys, the brain regions involved in talking are separate from the regions involved in feeling.’’

Is there any real scientific evidence for this? Nope. Turns out the early studies that led to this hypothesis have not been backed up by more detailed analyses. Yet so many people persist with the idea that 'boys are better at maths, girls are better at emotions' as if it is a known fact—and this 'fact' has made its way into policies that affect how kids are educated! And all that 'girls develop faster than boys'? Yeah, that's not backed up by the evidence either. Despite widespread beliefs, neuroscientists do not know of any distinct 'male' or 'female' circuits that can explain differences in behaviour between the sexes.

So basically studies into brain structure have yet to identify any specific difference between the brains of the two sexes that leads to a specific difference in behaviour. Yet boys and girls do behave differently if we take an average over an entire population. (And, yes, I realise averages are rubbish when it comes to making judgements on an individual level.) Let's use one of the most obvious and earliest differences as an example—appreciation of the colour pink. Were I to stick all Britain's little girls into one blender and all the boys into another, the former mixture would average out at a pink colour with a sprinkling of hearts and ponies, and the latter would be camouflage with a shot of train fuel and maybe a gun poking out the top.
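To put some entirely invented numbers on why those population averages say so little about individuals, here's a quick sketch: two groups whose means clearly differ, yet whose members overlap so much that group membership barely predicts any one child's score.

```python
import random
import statistics

# Entirely made-up numbers: two groups with different mean 'pink preference'
# scores but heavily overlapping individuals.
random.seed(42)
girls = [random.gauss(55, 20) for _ in range(10_000)]
boys = [random.gauss(45, 20) for _ in range(10_000)]

print(f"mean score (girls): {statistics.mean(girls):.1f}")
print(f"mean score (boys):  {statistics.mean(boys):.1f}")

# How often does a randomly picked boy out-score a randomly picked girl?
wins = sum(b > g for b, g in zip(boys, girls))
print(f"boy out-scores girl in {100 * wins / len(girls):.0f}% of pairings")
# The means differ clearly, yet a boy out-scores a girl in roughly a third
# of random pairings - the 'average-child smoothie' hides all that overlap.
```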

If there is no proof for the existence of a defined, biologically male or female brain at birth, how do we explain the differing colours of our average-child-smoothies? There's always the issue of what hormones we are exposed to in the womb or after birth, but could it also be that sex differences are shaped by our gender-differentiated experiences? Perhaps small differences in preferences become amplified over time as society, either deliberately or not, reinforces traditional gender stereotypes (Yay, my little boy kicked a ball—sports, sports, sports! Oh, he tried on my high heels? Yeah, let’s just ignore that). How much of our gender identity is truly hardwired into our brains from birth and how much is culturally created?

This is why I have a problem with the little boy diagnosed as ‘a girl trapped in a boy’s body’ that I mentioned at the start of this rambling monologue. By trying their best to define him as a ‘girl’ rather than as an individual, the parents and school are doing the exact same thing that they were trying to avoid—attempting to fit him into a gender-shaped box which, in reality, few people truly belong in. In the end, my own opinion does come down on the side of those trying to support this child (but not with the asshats using her to make money), but I am concerned that they are swapping one rigid set of gender rules for another. There's a lot more to being a woman than occasionally wanting to be a princess and surely a five-year-old has a long way to go before they can be accurately pigeon-holed, if at all.

In my perfect world, children would be allowed to experiment without anyone making any judgements or diagnoses (why do we need a medical term to make it acceptable for a small child to play around with wearing a dress, or growing their hair long?). That way, when they were mature enough, they would be free to make a balanced and personal decision on who they want to be and how they can best fit in with the rest of the world, including with our culturally defined ideals of gender.

Understanding how differences between the sexes emerge has the potential to tell us so much about the nature-nurture interaction, and could help us understand why some people associate so strongly with the opposite sex. But, unfortunately, it is open to careless interpretation by the media and public, who seem determined to use it to reinforce the gap between men and women rather than to tell us more about what shapes each of us as a person.

Further reading:
This is a really interesting article on neurological sex differences published in Cell by the author of Pink Brain, Blue Brain: How Small Differences Grow into Troublesome Gaps – and What We Can Do About It, and some feminist perspectives on sex and gender and trans issues.

Monday, 20 February 2012


I have a slight obsession with the sewers, which I don’t think is entirely normal or healthy. It’s the architecture more than the sewage itself but, as it happens, this post concerns the latter. Our tour of interesting things poo-related starts in London of 1858 and a period of history known as the Great Stink.

The first half of the 19th century saw the population of London soar to 2.5 million and that is a whole lot of sewage—something like 50 tonnes a day. It is estimated that before the Great Stink, there were around 200,000 cesspools distributed across London. Because it cost money to empty a cesspool, they would often overflow—cellars were flooded with sewage and, on more than one occasion, people are reported to have fallen through rotten floorboards and drowned in the cesspools beneath.

Sewage from the overflowing cesspools merged with factory and slaughterhouse waste, before ending up in the River Thames. By 1858, the Thames was overflowing with sewage and a particularly warm summer didn't help matters by encouraging the growth of bacteria. The resulting smell is hard to imagine, but it would have been particularly rich in rotten-egg-flavoured hydrogen sulphide and apparently got so bad that the House of Commons resorted to draping curtains soaked in chloride of lime in an attempt to block out the stench and even considered evacuating to a location outside the city.

At the same time, London was suffering from widespread outbreaks of cholera: a disease characterised by watery diarrhoea, vomiting and, back in the 19th century, rapid death. But no one really knew where cholera came from. The most widely accepted theory was that it was spread by air-borne 'miasma', or 'bad air'. Florence Nightingale was a proponent of this theory and worked hard to ensure hospitals were kept fresh-smelling and that nurses would 'keep the air [the patient] breathes as pure as the external air'. However, when it came to cholera, this theory was completely wrong.

A doctor called John Snow was one of the first people to suggest that the disease was transmitted by sewage-contaminated water—something of which there was a lot in 19th century London. Supporting his hypothesis was the 1854 cholera outbreak in Soho. During the first few days, 127 people on or near Broad Street died and, by the time the outbreak came to an end, the death toll stood at 616 people. Dr Snow managed to identify the source as the public water pump on Broad Street and he convinced the council to remove the pump handle to stop any further infections (although it is thought the outbreak was already diminishing all by itself by this point).

From a 19th Century journalist on the problem of cholera in London:
A fatal case of cholera occurred at the end of 1852 in Ashby-street, close to the "Paradise" of King's-cross - a street without any drainage, and full of cesspools. This death took place in the back parlour on the ground floor abutting on the yard containing a foul cesspool and untrapped drain, and where the broken pavement, when pressed with the foot, yielded a black, pitchy, half liquid matter in all directions. The inhabitants, although Irish, agreed to attend to all advice given to them as far as they were able, and a coffin was offered to them by the parish. They said that they would like to wait until the next morning (it was on Thursday evening that the woman died), as the son was anxious, if he could raise the money, to bury his mother himself; but they agreed, contrary to their custom on such occasions, to lock up the corpse at twelve o'clock at night, and allow no one to be in the room. On Friday, the day after death, the woman was buried, and so far it was creditable to these poor people, since they gave up their own desires and customs, which bade them retain the body.

George Godwin, 1854 - Chapter 9, via http://www.victorianlondon.org/index-2012.htm

The London sewage problem was finally addressed by the introduction of an extensive sewage system overseen by the engineer Joseph Bazalgette. In total, his team built 82 miles of underground sewers and 1,100 miles of street sewers, at a cost of £4.2 million, and the work took nearly 10 years to complete.

London sewer system opening - via BBC

We now know that cholera is caused by a bacterium called Vibrio cholerae. In order to become pathogenic to humans, the originally environmental bacterium needs to acquire two bacteriophages (viruses that integrate into the bacterium's genome)—one that provides the bacterium with the ability to attach to the host's intestinal cells and one that leads to secretion of a toxin that results in the severe diarrhoea associated with this disease.
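As a minimal sketch of that two-hit requirement (the phage names below are my own shorthand, not official nomenclature):

```python
# A minimal sketch of the two-phage requirement described above (the phage
# names are invented shorthand): an environmental strain only becomes a
# human pathogen once it carries both the attachment phage and the
# toxin-encoding phage.
def is_pathogenic(phages: set) -> bool:
    return {"attachment_phage", "toxin_phage"} <= phages

print(is_pathogenic({"attachment_phage"}))                 # False
print(is_pathogenic({"attachment_phage", "toxin_phage"}))  # True
```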

Now I don’t often get teary-eyed at scientific meetings but, several years ago, a lecture by a guy called Richard Cash made me remember why I’d got into science in the first place. See, cholera is a disease which kills around 50-60% of those infected (sometimes within hours of the first symptoms) but with treatment, the mortality rate drops to less than 1%. And the reason that this disease is now almost completely curable is down to Professor Cash. The problem with cholera is that a patient can lose something like 20-30 litres of fluid a day and death occurs due to dehydration. So Cash and his team came up with an unbelievably simple solution—replace the patient’s fluid and electrolytes as quickly as they are lost. Oral rehydration therapy is a solution of salts and sugars, and is thought to have saved something like 60 million lives since its introduction. Patients who would have died within hours can now make a recovery within a day or two. Awesome, right?
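The arithmetic behind that simple solution is sobering. A quick back-of-envelope calculation using the figures above:

```python
# Back-of-envelope arithmetic using the figures in the post: a severe
# cholera patient losing 20-30 litres of fluid a day needs it replaced at
# a matching rate, around the clock, to avoid fatal dehydration.
for daily_loss_litres in (20, 30):
    per_hour = daily_loss_litres / 24
    cups = per_hour / 0.25  # assuming a 250 ml cup
    print(f"{daily_loss_litres} L/day -> roughly {per_hour:.1f} L "
          f"({cups:.0f} cups) of rehydration solution every single hour")
```

That works out at three to five cups of solution every hour, day and night, which is why something as cheap and low-tech as a sugar-and-salt drink can stand in for intravenous fluids.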


Today, we tend to hear of cholera mainly when it is associated with natural disasters where contaminated water can spread disease throughout a region where the infrastructure has been severely compromised. One of the most recent outbreaks occurred nearly a year after the Haiti earthquake—cholera left over 6,000 dead and caused nearly 350,000 cases. But, prior to the outbreak, Haiti had been cholera-free for half a century. So where did it come from?


Image available from Wikimedia Commons

I mentioned earlier that cholera can result from an environmental strain of bacteria acquiring the phages encoding virulence factors. But, unfortunately, the Haiti outbreak was actually brought into the country by the people trying to help rebuild following the earthquake. By comparing the DNA sequence of the outbreak strain with strains known to infect other parts of the world, it was possible to narrow down the source of the outbreak to Nepal. And UN peacekeepers from Nepal were known to be based near the river responsible for the first cases. It is highly likely that it was one of these soldiers who brought the disease to Haiti, and this case demonstrates how quickly cholera can spread if it gets into the water system. Lessons learnt from this outbreak will hopefully lead to visitors from cholera-endemic countries being vaccinated before travelling to post-disaster areas, even if they are showing no sign of the disease. After all, something close to 3 in 100 patients remain asymptomatic after infection.
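The logic of that comparison can be sketched in a few lines. The sequences below are invented and only a few letters long (real analyses compare millions of bases with far more sophisticated methods), but the principle is the same: the reference strain with the fewest differences from the outbreak strain points to the likely source.

```python
# A toy version of the outbreak-tracing logic: rank reference strains by
# how little their sequence differs from the outbreak strain. The strain
# names and sequences here are invented for illustration.
reference_strains = {
    "Nepal_2010":      "ACGTTGCAATCGTACG",
    "Bangladesh_2002": "ACGTAGCAATCGAACG",
    "Peru_1991":       "TCGTAGCATTCGAACC",
}
outbreak = "ACGTTGCAATCGTACG"

def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

ranked = sorted(reference_strains.items(),
                key=lambda item: hamming(outbreak, item[1]))
for name, seq in ranked:
    print(f"{name:16s} differs at {hamming(outbreak, seq)} positions")
# The closest match tops the list - in the real Haiti analysis, the
# outbreak strain was most similar to strains circulating in Nepal.
```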

The biggest obstacle in the way of eradicating cholera today is poor sanitation leading to contamination of drinking water. In some parts of the world, the link between hygiene and disease prevention is not as obvious as it is to us in the Western world. Cholera isn’t a disease which requires complicated drugs or vaccines to prevent—washing hands with soap, avoiding contact with human waste, and clean drinking water would make all the difference.